00:00:00.000 Started by upstream project "autotest-per-patch" build number 132706 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.631 The recommended git tool is: git 00:00:00.631 using credential 00000000-0000-0000-0000-000000000002 00:00:00.634 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.645 Fetching changes from the remote Git repository 00:00:00.649 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.659 Using shallow fetch with depth 1 00:00:00.659 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.659 > git --version # timeout=10 00:00:00.670 > git --version # 'git version 2.39.2' 00:00:00.670 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.689 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.689 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.813 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.825 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.835 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.835 > git config core.sparsecheckout # timeout=10 00:00:06.846 > git read-tree -mu HEAD # timeout=10 00:00:06.861 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.882 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.882 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.957 [Pipeline] Start of Pipeline 00:00:06.970 [Pipeline] library 00:00:06.972 Loading library shm_lib@master 00:00:06.972 Library shm_lib@master is cached. Copying from home. 00:00:06.992 [Pipeline] node 00:00:06.998 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.001 [Pipeline] { 00:00:07.011 [Pipeline] catchError 00:00:07.013 [Pipeline] { 00:00:07.026 [Pipeline] wrap 00:00:07.035 [Pipeline] { 00:00:07.044 [Pipeline] stage 00:00:07.046 [Pipeline] { (Prologue) 00:00:07.295 [Pipeline] sh 00:00:07.616 + logger -p user.info -t JENKINS-CI 00:00:07.631 [Pipeline] echo 00:00:07.632 Node: WFP6 00:00:07.639 [Pipeline] sh 00:00:07.935 [Pipeline] setCustomBuildProperty 00:00:07.946 [Pipeline] echo 00:00:07.947 Cleanup processes 00:00:07.951 [Pipeline] sh 00:00:08.231 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.231 1032199 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.245 [Pipeline] sh 00:00:08.556 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.557 ++ grep -v 'sudo pgrep' 00:00:08.557 ++ awk '{print $1}' 00:00:08.557 + sudo kill -9 00:00:08.557 + true 00:00:08.573 [Pipeline] cleanWs 00:00:08.584 [WS-CLEANUP] Deleting project workspace... 00:00:08.584 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.591 [WS-CLEANUP] done 00:00:08.597 [Pipeline] setCustomBuildProperty 00:00:08.613 [Pipeline] sh 00:00:08.896 + sudo git config --global --replace-all safe.directory '*' 00:00:09.001 [Pipeline] httpRequest 00:00:09.352 [Pipeline] echo 00:00:09.353 Sorcerer 10.211.164.20 is alive 00:00:09.363 [Pipeline] retry 00:00:09.365 [Pipeline] { 00:00:09.378 [Pipeline] httpRequest 00:00:09.382 HttpMethod: GET 00:00:09.383 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.383 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.409 Response Code: HTTP/1.1 200 OK 00:00:09.410 Success: Status code 200 is in the accepted range: 200,404 00:00:09.410 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.680 [Pipeline] } 00:00:32.698 [Pipeline] // retry 00:00:32.706 [Pipeline] sh 00:00:32.991 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.004 [Pipeline] httpRequest 00:00:33.615 [Pipeline] echo 00:00:33.617 Sorcerer 10.211.164.20 is alive 00:00:33.629 [Pipeline] retry 00:00:33.631 [Pipeline] { 00:00:33.648 [Pipeline] httpRequest 00:00:33.653 HttpMethod: GET 00:00:33.653 URL: http://10.211.164.20/packages/spdk_2b8672176e285641762e474fa00f272958e36a22.tar.gz 00:00:33.654 Sending request to url: http://10.211.164.20/packages/spdk_2b8672176e285641762e474fa00f272958e36a22.tar.gz 00:00:33.662 Response Code: HTTP/1.1 200 OK 00:00:33.662 Success: Status code 200 is in the accepted range: 200,404 00:00:33.662 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2b8672176e285641762e474fa00f272958e36a22.tar.gz 00:02:20.385 [Pipeline] } 00:02:20.403 [Pipeline] // retry 00:02:20.411 [Pipeline] sh 00:02:20.731 + tar --no-same-owner -xf spdk_2b8672176e285641762e474fa00f272958e36a22.tar.gz 00:02:23.280 [Pipeline] sh 00:02:23.565 + git -C spdk log 
--oneline -n5 00:02:23.565 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:23.565 e2dfdf06c accel/mlx5: Register post_poller handler 00:02:23.565 3c8001115 accel/mlx5: More precise condition to update DB 00:02:23.565 98eca6fa0 lib/thread: Add API to register a post poller handler 00:02:23.565 2c140f58f nvme/rdma: Support accel sequence 00:02:23.576 [Pipeline] } 00:02:23.590 [Pipeline] // stage 00:02:23.601 [Pipeline] stage 00:02:23.605 [Pipeline] { (Prepare) 00:02:23.624 [Pipeline] writeFile 00:02:23.640 [Pipeline] sh 00:02:23.924 + logger -p user.info -t JENKINS-CI 00:02:23.936 [Pipeline] sh 00:02:24.221 + logger -p user.info -t JENKINS-CI 00:02:24.235 [Pipeline] sh 00:02:24.525 + cat autorun-spdk.conf 00:02:24.525 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.525 SPDK_TEST_NVMF=1 00:02:24.525 SPDK_TEST_NVME_CLI=1 00:02:24.525 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.525 SPDK_TEST_NVMF_NICS=e810 00:02:24.525 SPDK_TEST_VFIOUSER=1 00:02:24.525 SPDK_RUN_UBSAN=1 00:02:24.525 NET_TYPE=phy 00:02:24.532 RUN_NIGHTLY=0 00:02:24.538 [Pipeline] readFile 00:02:24.571 [Pipeline] withEnv 00:02:24.574 [Pipeline] { 00:02:24.588 [Pipeline] sh 00:02:24.873 + set -ex 00:02:24.873 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:24.873 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:24.873 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.873 ++ SPDK_TEST_NVMF=1 00:02:24.873 ++ SPDK_TEST_NVME_CLI=1 00:02:24.873 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.873 ++ SPDK_TEST_NVMF_NICS=e810 00:02:24.873 ++ SPDK_TEST_VFIOUSER=1 00:02:24.873 ++ SPDK_RUN_UBSAN=1 00:02:24.873 ++ NET_TYPE=phy 00:02:24.873 ++ RUN_NIGHTLY=0 00:02:24.873 + case $SPDK_TEST_NVMF_NICS in 00:02:24.873 + DRIVERS=ice 00:02:24.873 + [[ tcp == \r\d\m\a ]] 00:02:24.873 + [[ -n ice ]] 00:02:24.873 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:24.873 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:24.873 rmmod: ERROR: Module mlx5_ib is not 
currently loaded 00:02:24.873 rmmod: ERROR: Module irdma is not currently loaded 00:02:24.873 rmmod: ERROR: Module i40iw is not currently loaded 00:02:24.873 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:24.873 + true 00:02:24.873 + for D in $DRIVERS 00:02:24.873 + sudo modprobe ice 00:02:24.873 + exit 0 00:02:24.882 [Pipeline] } 00:02:24.899 [Pipeline] // withEnv 00:02:24.905 [Pipeline] } 00:02:24.921 [Pipeline] // stage 00:02:24.931 [Pipeline] catchError 00:02:24.933 [Pipeline] { 00:02:24.947 [Pipeline] timeout 00:02:24.947 Timeout set to expire in 1 hr 0 min 00:02:24.949 [Pipeline] { 00:02:24.962 [Pipeline] stage 00:02:24.964 [Pipeline] { (Tests) 00:02:24.979 [Pipeline] sh 00:02:25.264 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:25.264 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:25.264 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:25.264 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:25.264 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.264 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:25.264 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:25.264 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:25.264 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:25.264 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:25.264 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:25.264 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:25.264 + source /etc/os-release 00:02:25.264 ++ NAME='Fedora Linux' 00:02:25.264 ++ VERSION='39 (Cloud Edition)' 00:02:25.264 ++ ID=fedora 00:02:25.264 ++ VERSION_ID=39 00:02:25.264 ++ VERSION_CODENAME= 00:02:25.264 ++ PLATFORM_ID=platform:f39 00:02:25.264 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.264 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.264 ++ LOGO=fedora-logo-icon 00:02:25.264 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.264 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.264 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.264 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.264 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.264 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.264 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.264 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.264 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.264 ++ SUPPORT_END=2024-11-12 00:02:25.264 ++ VARIANT='Cloud Edition' 00:02:25.264 ++ VARIANT_ID=cloud 00:02:25.264 + uname -a 00:02:25.264 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.264 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:27.801 Hugepages 00:02:27.801 node hugesize free / total 00:02:27.801 node0 1048576kB 0 / 0 00:02:27.801 node0 2048kB 0 / 0 00:02:27.801 node1 1048576kB 0 / 0 00:02:27.801 node1 2048kB 0 / 0 00:02:27.801 00:02:27.801 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:27.801 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:27.801 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:02:27.801 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:27.801 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:27.801 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:27.801 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:27.801 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:27.801 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:27.801 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:27.801 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:27.801 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:27.801 + rm -f /tmp/spdk-ld-path 00:02:27.801 + source autorun-spdk.conf 00:02:27.801 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.801 ++ SPDK_TEST_NVMF=1 00:02:27.801 ++ SPDK_TEST_NVME_CLI=1 00:02:27.801 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:27.801 ++ SPDK_TEST_NVMF_NICS=e810 00:02:27.801 ++ SPDK_TEST_VFIOUSER=1 00:02:27.801 ++ SPDK_RUN_UBSAN=1 00:02:27.801 ++ NET_TYPE=phy 00:02:27.801 ++ RUN_NIGHTLY=0 00:02:27.801 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:27.801 + [[ -n '' ]] 00:02:27.801 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.801 + for M in /var/spdk/build-*-manifest.txt 00:02:27.801 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:27.801 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:27.801 + for M in /var/spdk/build-*-manifest.txt 00:02:27.801 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:27.801 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:27.801 + for M in /var/spdk/build-*-manifest.txt 00:02:27.801 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:02:27.801 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:27.801 ++ uname 00:02:27.801 + [[ Linux == \L\i\n\u\x ]] 00:02:27.801 + sudo dmesg -T 00:02:28.060 + sudo dmesg --clear 00:02:28.060 + dmesg_pid=1033661 00:02:28.060 + [[ Fedora Linux == FreeBSD ]] 00:02:28.060 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.060 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.060 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:28.060 + [[ -x /usr/src/fio-static/fio ]] 00:02:28.060 + sudo dmesg -Tw 00:02:28.060 + export FIO_BIN=/usr/src/fio-static/fio 00:02:28.060 + FIO_BIN=/usr/src/fio-static/fio 00:02:28.060 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:28.060 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:28.060 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:28.060 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:28.060 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:28.060 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:28.060 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:28.060 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:28.060 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:28.060 20:55:36 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:28.060 20:55:36 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:28.060 20:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:28.060 20:55:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:28.060 20:55:36 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:28.060 20:55:36 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:28.060 20:55:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:28.060 20:55:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:28.060 20:55:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:28.060 20:55:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.060 20:55:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.060 20:55:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.060 20:55:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.060 20:55:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.060 20:55:36 -- paths/export.sh@5 -- $ export PATH 00:02:28.060 20:55:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.060 20:55:36 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.060 20:55:36 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:28.060 20:55:36 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733428536.XXXXXX 00:02:28.060 20:55:36 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733428536.MZR7tK 00:02:28.060 20:55:36 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:28.060 20:55:36 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:28.060 20:55:36 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:28.060 20:55:36 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:28.060 20:55:36 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:28.060 20:55:36 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:28.060 20:55:36 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:28.060 20:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.060 20:55:36 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:28.060 20:55:36 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:28.060 20:55:36 -- pm/common@17 -- $ local monitor 00:02:28.061 20:55:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.061 20:55:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.061 20:55:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.061 20:55:36 -- pm/common@21 -- $ date +%s 00:02:28.061 20:55:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.061 20:55:36 -- pm/common@21 -- $ date +%s 00:02:28.061 20:55:36 -- pm/common@25 -- $ sleep 1 00:02:28.061 20:55:36 -- pm/common@21 -- $ date +%s 00:02:28.061 20:55:36 -- pm/common@21 -- $ date +%s 00:02:28.061 20:55:36 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428536 00:02:28.061 20:55:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428536 00:02:28.061 20:55:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428536 00:02:28.061 20:55:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428536 00:02:28.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428536_collect-cpu-load.pm.log 00:02:28.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428536_collect-vmstat.pm.log 00:02:28.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428536_collect-cpu-temp.pm.log 00:02:28.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428536_collect-bmc-pm.bmc.pm.log 00:02:29.257 20:55:37 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:29.257 20:55:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:29.257 20:55:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:29.257 20:55:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.257 20:55:37 -- spdk/autobuild.sh@16 -- $ date -u 00:02:29.257 Thu Dec 5 07:55:37 PM UTC 2024 00:02:29.257 20:55:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:02:29.257 v25.01-pre-301-g2b8672176 00:02:29.257 20:55:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:29.257 20:55:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:29.257 20:55:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:29.257 20:55:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:29.257 20:55:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:29.257 20:55:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.257 ************************************ 00:02:29.257 START TEST ubsan 00:02:29.257 ************************************ 00:02:29.257 20:55:37 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:29.257 using ubsan 00:02:29.257 00:02:29.257 real 0m0.000s 00:02:29.257 user 0m0.000s 00:02:29.257 sys 0m0.000s 00:02:29.257 20:55:37 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:29.257 20:55:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:29.257 ************************************ 00:02:29.257 END TEST ubsan 00:02:29.257 ************************************ 00:02:29.257 20:55:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:29.257 20:55:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:29.257 20:55:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:29.257 20:55:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:29.257 20:55:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:29.257 20:55:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:29.257 20:55:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:29.257 20:55:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:29.257 20:55:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:29.517 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:29.517 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:29.776 Using 'verbs' RDMA provider 00:02:42.958 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:55.204 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:55.204 Creating mk/config.mk...done. 00:02:55.204 Creating mk/cc.flags.mk...done. 00:02:55.204 Type 'make' to build. 00:02:55.204 20:56:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:55.204 20:56:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:55.204 20:56:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:55.204 20:56:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.204 ************************************ 00:02:55.204 START TEST make 00:02:55.204 ************************************ 00:02:55.204 20:56:02 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:55.204 make[1]: Nothing to be done for 'all'. 
00:02:56.588 The Meson build system 00:02:56.588 Version: 1.5.0 00:02:56.588 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:56.588 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:56.588 Build type: native build 00:02:56.588 Project name: libvfio-user 00:02:56.588 Project version: 0.0.1 00:02:56.588 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.588 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.588 Host machine cpu family: x86_64 00:02:56.588 Host machine cpu: x86_64 00:02:56.588 Run-time dependency threads found: YES 00:02:56.588 Library dl found: YES 00:02:56.588 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:56.588 Run-time dependency json-c found: YES 0.17 00:02:56.588 Run-time dependency cmocka found: YES 1.1.7 00:02:56.588 Program pytest-3 found: NO 00:02:56.588 Program flake8 found: NO 00:02:56.588 Program misspell-fixer found: NO 00:02:56.588 Program restructuredtext-lint found: NO 00:02:56.588 Program valgrind found: YES (/usr/bin/valgrind) 00:02:56.588 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.588 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.588 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.588 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:56.588 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:56.588 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:56.588 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:56.588 Build targets in project: 8 00:02:56.588 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:56.588 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:56.588 00:02:56.588 libvfio-user 0.0.1 00:02:56.588 00:02:56.588 User defined options 00:02:56.588 buildtype : debug 00:02:56.588 default_library: shared 00:02:56.588 libdir : /usr/local/lib 00:02:56.588 00:02:56.588 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.152 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:57.152 [1/37] Compiling C object samples/null.p/null.c.o 00:02:57.152 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:57.152 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:57.152 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:57.152 [5/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:57.152 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:57.152 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:57.152 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:57.152 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:57.152 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:57.152 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:57.152 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:57.152 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:57.152 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:57.152 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:57.152 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:57.152 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:57.152 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_sock.c.o 00:02:57.152 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:57.152 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:57.152 [21/37] Compiling C object samples/server.p/server.c.o 00:02:57.152 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:57.152 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:57.152 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:57.152 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:57.152 [26/37] Compiling C object samples/client.p/client.c.o 00:02:57.152 [27/37] Linking target samples/client 00:02:57.152 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:57.409 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:57.409 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:57.409 [31/37] Linking target test/unit_tests 00:02:57.409 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:57.409 [33/37] Linking target samples/lspci 00:02:57.409 [34/37] Linking target samples/gpio-pci-idio-16 00:02:57.409 [35/37] Linking target samples/server 00:02:57.409 [36/37] Linking target samples/null 00:02:57.409 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:57.409 INFO: autodetecting backend as ninja 00:02:57.409 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:57.666 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:57.924 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:57.924 ninja: no work to do. 
00:03:03.211 The Meson build system 00:03:03.211 Version: 1.5.0 00:03:03.211 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:03.211 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:03.211 Build type: native build 00:03:03.211 Program cat found: YES (/usr/bin/cat) 00:03:03.211 Project name: DPDK 00:03:03.211 Project version: 24.03.0 00:03:03.211 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:03.211 C linker for the host machine: cc ld.bfd 2.40-14 00:03:03.211 Host machine cpu family: x86_64 00:03:03.211 Host machine cpu: x86_64 00:03:03.211 Message: ## Building in Developer Mode ## 00:03:03.211 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:03.211 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:03.211 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:03.211 Program python3 found: YES (/usr/bin/python3) 00:03:03.211 Program cat found: YES (/usr/bin/cat) 00:03:03.211 Compiler for C supports arguments -march=native: YES 00:03:03.211 Checking for size of "void *" : 8 00:03:03.211 Checking for size of "void *" : 8 (cached) 00:03:03.211 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:03.212 Library m found: YES 00:03:03.212 Library numa found: YES 00:03:03.212 Has header "numaif.h" : YES 00:03:03.212 Library fdt found: NO 00:03:03.212 Library execinfo found: NO 00:03:03.212 Has header "execinfo.h" : YES 00:03:03.212 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:03.212 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:03.212 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:03.212 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:03.212 Run-time dependency openssl found: YES 3.1.1 00:03:03.212 Run-time 
dependency libpcap found: YES 1.10.4 00:03:03.212 Has header "pcap.h" with dependency libpcap: YES 00:03:03.212 Compiler for C supports arguments -Wcast-qual: YES 00:03:03.212 Compiler for C supports arguments -Wdeprecated: YES 00:03:03.212 Compiler for C supports arguments -Wformat: YES 00:03:03.212 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:03.212 Compiler for C supports arguments -Wformat-security: NO 00:03:03.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:03.212 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:03.212 Compiler for C supports arguments -Wnested-externs: YES 00:03:03.212 Compiler for C supports arguments -Wold-style-definition: YES 00:03:03.212 Compiler for C supports arguments -Wpointer-arith: YES 00:03:03.212 Compiler for C supports arguments -Wsign-compare: YES 00:03:03.212 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:03.212 Compiler for C supports arguments -Wundef: YES 00:03:03.212 Compiler for C supports arguments -Wwrite-strings: YES 00:03:03.212 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:03.212 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:03.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:03.212 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:03.212 Program objdump found: YES (/usr/bin/objdump) 00:03:03.212 Compiler for C supports arguments -mavx512f: YES 00:03:03.212 Checking if "AVX512 checking" compiles: YES 00:03:03.212 Fetching value of define "__SSE4_2__" : 1 00:03:03.212 Fetching value of define "__AES__" : 1 00:03:03.212 Fetching value of define "__AVX__" : 1 00:03:03.212 Fetching value of define "__AVX2__" : 1 00:03:03.212 Fetching value of define "__AVX512BW__" : 1 00:03:03.212 Fetching value of define "__AVX512CD__" : 1 00:03:03.212 Fetching value of define "__AVX512DQ__" : 1 00:03:03.212 Fetching value of define "__AVX512F__" : 1 
00:03:03.212 Fetching value of define "__AVX512VL__" : 1 00:03:03.212 Fetching value of define "__PCLMUL__" : 1 00:03:03.212 Fetching value of define "__RDRND__" : 1 00:03:03.212 Fetching value of define "__RDSEED__" : 1 00:03:03.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:03.212 Fetching value of define "__znver1__" : (undefined) 00:03:03.212 Fetching value of define "__znver2__" : (undefined) 00:03:03.212 Fetching value of define "__znver3__" : (undefined) 00:03:03.212 Fetching value of define "__znver4__" : (undefined) 00:03:03.212 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:03.212 Message: lib/log: Defining dependency "log" 00:03:03.212 Message: lib/kvargs: Defining dependency "kvargs" 00:03:03.212 Message: lib/telemetry: Defining dependency "telemetry" 00:03:03.212 Checking for function "getentropy" : NO 00:03:03.212 Message: lib/eal: Defining dependency "eal" 00:03:03.212 Message: lib/ring: Defining dependency "ring" 00:03:03.212 Message: lib/rcu: Defining dependency "rcu" 00:03:03.212 Message: lib/mempool: Defining dependency "mempool" 00:03:03.212 Message: lib/mbuf: Defining dependency "mbuf" 00:03:03.212 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:03.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:03.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:03.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:03.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:03.212 Compiler for C supports arguments -mpclmul: YES 00:03:03.212 Compiler for C supports arguments -maes: YES 00:03:03.212 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:03.212 Compiler for C supports arguments -mavx512bw: YES 00:03:03.212 Compiler for C supports arguments -mavx512dq: YES 00:03:03.212 Compiler for C supports arguments -mavx512vl: YES 00:03:03.212 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:03:03.212 Compiler for C supports arguments -mavx2: YES 00:03:03.212 Compiler for C supports arguments -mavx: YES 00:03:03.212 Message: lib/net: Defining dependency "net" 00:03:03.212 Message: lib/meter: Defining dependency "meter" 00:03:03.212 Message: lib/ethdev: Defining dependency "ethdev" 00:03:03.212 Message: lib/pci: Defining dependency "pci" 00:03:03.212 Message: lib/cmdline: Defining dependency "cmdline" 00:03:03.212 Message: lib/hash: Defining dependency "hash" 00:03:03.212 Message: lib/timer: Defining dependency "timer" 00:03:03.212 Message: lib/compressdev: Defining dependency "compressdev" 00:03:03.212 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:03.212 Message: lib/dmadev: Defining dependency "dmadev" 00:03:03.212 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:03.212 Message: lib/power: Defining dependency "power" 00:03:03.212 Message: lib/reorder: Defining dependency "reorder" 00:03:03.212 Message: lib/security: Defining dependency "security" 00:03:03.212 Has header "linux/userfaultfd.h" : YES 00:03:03.212 Has header "linux/vduse.h" : YES 00:03:03.212 Message: lib/vhost: Defining dependency "vhost" 00:03:03.212 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:03.212 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:03.212 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:03.212 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:03.212 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:03.212 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:03.212 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:03.212 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:03.212 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:03.212 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:03:03.212 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:03.212 Configuring doxy-api-html.conf using configuration 00:03:03.212 Configuring doxy-api-man.conf using configuration 00:03:03.212 Program mandb found: YES (/usr/bin/mandb) 00:03:03.212 Program sphinx-build found: NO 00:03:03.212 Configuring rte_build_config.h using configuration 00:03:03.212 Message: 00:03:03.212 ================= 00:03:03.212 Applications Enabled 00:03:03.212 ================= 00:03:03.212 00:03:03.212 apps: 00:03:03.212 00:03:03.212 00:03:03.212 Message: 00:03:03.212 ================= 00:03:03.212 Libraries Enabled 00:03:03.212 ================= 00:03:03.212 00:03:03.212 libs: 00:03:03.212 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:03.212 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:03.212 cryptodev, dmadev, power, reorder, security, vhost, 00:03:03.212 00:03:03.212 Message: 00:03:03.212 =============== 00:03:03.212 Drivers Enabled 00:03:03.212 =============== 00:03:03.212 00:03:03.212 common: 00:03:03.212 00:03:03.212 bus: 00:03:03.212 pci, vdev, 00:03:03.212 mempool: 00:03:03.212 ring, 00:03:03.212 dma: 00:03:03.212 00:03:03.212 net: 00:03:03.212 00:03:03.212 crypto: 00:03:03.212 00:03:03.212 compress: 00:03:03.212 00:03:03.212 vdpa: 00:03:03.212 00:03:03.212 00:03:03.212 Message: 00:03:03.212 ================= 00:03:03.212 Content Skipped 00:03:03.212 ================= 00:03:03.212 00:03:03.212 apps: 00:03:03.212 dumpcap: explicitly disabled via build config 00:03:03.212 graph: explicitly disabled via build config 00:03:03.212 pdump: explicitly disabled via build config 00:03:03.212 proc-info: explicitly disabled via build config 00:03:03.212 test-acl: explicitly disabled via build config 00:03:03.212 test-bbdev: explicitly disabled via build config 00:03:03.212 test-cmdline: explicitly disabled via build config 00:03:03.212 test-compress-perf: explicitly disabled via build config 00:03:03.212 test-crypto-perf: explicitly disabled 
via build config 00:03:03.212 test-dma-perf: explicitly disabled via build config 00:03:03.212 test-eventdev: explicitly disabled via build config 00:03:03.212 test-fib: explicitly disabled via build config 00:03:03.212 test-flow-perf: explicitly disabled via build config 00:03:03.212 test-gpudev: explicitly disabled via build config 00:03:03.212 test-mldev: explicitly disabled via build config 00:03:03.212 test-pipeline: explicitly disabled via build config 00:03:03.212 test-pmd: explicitly disabled via build config 00:03:03.212 test-regex: explicitly disabled via build config 00:03:03.212 test-sad: explicitly disabled via build config 00:03:03.212 test-security-perf: explicitly disabled via build config 00:03:03.212 00:03:03.212 libs: 00:03:03.212 argparse: explicitly disabled via build config 00:03:03.212 metrics: explicitly disabled via build config 00:03:03.212 acl: explicitly disabled via build config 00:03:03.212 bbdev: explicitly disabled via build config 00:03:03.212 bitratestats: explicitly disabled via build config 00:03:03.212 bpf: explicitly disabled via build config 00:03:03.212 cfgfile: explicitly disabled via build config 00:03:03.212 distributor: explicitly disabled via build config 00:03:03.212 efd: explicitly disabled via build config 00:03:03.212 eventdev: explicitly disabled via build config 00:03:03.212 dispatcher: explicitly disabled via build config 00:03:03.212 gpudev: explicitly disabled via build config 00:03:03.212 gro: explicitly disabled via build config 00:03:03.212 gso: explicitly disabled via build config 00:03:03.212 ip_frag: explicitly disabled via build config 00:03:03.212 jobstats: explicitly disabled via build config 00:03:03.212 latencystats: explicitly disabled via build config 00:03:03.212 lpm: explicitly disabled via build config 00:03:03.212 member: explicitly disabled via build config 00:03:03.212 pcapng: explicitly disabled via build config 00:03:03.212 rawdev: explicitly disabled via build config 00:03:03.212 regexdev: 
explicitly disabled via build config 00:03:03.213 mldev: explicitly disabled via build config 00:03:03.213 rib: explicitly disabled via build config 00:03:03.213 sched: explicitly disabled via build config 00:03:03.213 stack: explicitly disabled via build config 00:03:03.213 ipsec: explicitly disabled via build config 00:03:03.213 pdcp: explicitly disabled via build config 00:03:03.213 fib: explicitly disabled via build config 00:03:03.213 port: explicitly disabled via build config 00:03:03.213 pdump: explicitly disabled via build config 00:03:03.213 table: explicitly disabled via build config 00:03:03.213 pipeline: explicitly disabled via build config 00:03:03.213 graph: explicitly disabled via build config 00:03:03.213 node: explicitly disabled via build config 00:03:03.213 00:03:03.213 drivers: 00:03:03.213 common/cpt: not in enabled drivers build config 00:03:03.213 common/dpaax: not in enabled drivers build config 00:03:03.213 common/iavf: not in enabled drivers build config 00:03:03.213 common/idpf: not in enabled drivers build config 00:03:03.213 common/ionic: not in enabled drivers build config 00:03:03.213 common/mvep: not in enabled drivers build config 00:03:03.213 common/octeontx: not in enabled drivers build config 00:03:03.213 bus/auxiliary: not in enabled drivers build config 00:03:03.213 bus/cdx: not in enabled drivers build config 00:03:03.213 bus/dpaa: not in enabled drivers build config 00:03:03.213 bus/fslmc: not in enabled drivers build config 00:03:03.213 bus/ifpga: not in enabled drivers build config 00:03:03.213 bus/platform: not in enabled drivers build config 00:03:03.213 bus/uacce: not in enabled drivers build config 00:03:03.213 bus/vmbus: not in enabled drivers build config 00:03:03.213 common/cnxk: not in enabled drivers build config 00:03:03.213 common/mlx5: not in enabled drivers build config 00:03:03.213 common/nfp: not in enabled drivers build config 00:03:03.213 common/nitrox: not in enabled drivers build config 00:03:03.213 
common/qat: not in enabled drivers build config 00:03:03.213 common/sfc_efx: not in enabled drivers build config 00:03:03.213 mempool/bucket: not in enabled drivers build config 00:03:03.213 mempool/cnxk: not in enabled drivers build config 00:03:03.213 mempool/dpaa: not in enabled drivers build config 00:03:03.213 mempool/dpaa2: not in enabled drivers build config 00:03:03.213 mempool/octeontx: not in enabled drivers build config 00:03:03.213 mempool/stack: not in enabled drivers build config 00:03:03.213 dma/cnxk: not in enabled drivers build config 00:03:03.213 dma/dpaa: not in enabled drivers build config 00:03:03.213 dma/dpaa2: not in enabled drivers build config 00:03:03.213 dma/hisilicon: not in enabled drivers build config 00:03:03.213 dma/idxd: not in enabled drivers build config 00:03:03.213 dma/ioat: not in enabled drivers build config 00:03:03.213 dma/skeleton: not in enabled drivers build config 00:03:03.213 net/af_packet: not in enabled drivers build config 00:03:03.213 net/af_xdp: not in enabled drivers build config 00:03:03.213 net/ark: not in enabled drivers build config 00:03:03.213 net/atlantic: not in enabled drivers build config 00:03:03.213 net/avp: not in enabled drivers build config 00:03:03.213 net/axgbe: not in enabled drivers build config 00:03:03.213 net/bnx2x: not in enabled drivers build config 00:03:03.213 net/bnxt: not in enabled drivers build config 00:03:03.213 net/bonding: not in enabled drivers build config 00:03:03.213 net/cnxk: not in enabled drivers build config 00:03:03.213 net/cpfl: not in enabled drivers build config 00:03:03.213 net/cxgbe: not in enabled drivers build config 00:03:03.213 net/dpaa: not in enabled drivers build config 00:03:03.213 net/dpaa2: not in enabled drivers build config 00:03:03.213 net/e1000: not in enabled drivers build config 00:03:03.213 net/ena: not in enabled drivers build config 00:03:03.213 net/enetc: not in enabled drivers build config 00:03:03.213 net/enetfec: not in enabled drivers build 
config 00:03:03.213 net/enic: not in enabled drivers build config 00:03:03.213 net/failsafe: not in enabled drivers build config 00:03:03.213 net/fm10k: not in enabled drivers build config 00:03:03.213 net/gve: not in enabled drivers build config 00:03:03.213 net/hinic: not in enabled drivers build config 00:03:03.213 net/hns3: not in enabled drivers build config 00:03:03.213 net/i40e: not in enabled drivers build config 00:03:03.213 net/iavf: not in enabled drivers build config 00:03:03.213 net/ice: not in enabled drivers build config 00:03:03.213 net/idpf: not in enabled drivers build config 00:03:03.213 net/igc: not in enabled drivers build config 00:03:03.213 net/ionic: not in enabled drivers build config 00:03:03.213 net/ipn3ke: not in enabled drivers build config 00:03:03.213 net/ixgbe: not in enabled drivers build config 00:03:03.213 net/mana: not in enabled drivers build config 00:03:03.213 net/memif: not in enabled drivers build config 00:03:03.213 net/mlx4: not in enabled drivers build config 00:03:03.213 net/mlx5: not in enabled drivers build config 00:03:03.213 net/mvneta: not in enabled drivers build config 00:03:03.213 net/mvpp2: not in enabled drivers build config 00:03:03.213 net/netvsc: not in enabled drivers build config 00:03:03.213 net/nfb: not in enabled drivers build config 00:03:03.213 net/nfp: not in enabled drivers build config 00:03:03.213 net/ngbe: not in enabled drivers build config 00:03:03.213 net/null: not in enabled drivers build config 00:03:03.213 net/octeontx: not in enabled drivers build config 00:03:03.213 net/octeon_ep: not in enabled drivers build config 00:03:03.213 net/pcap: not in enabled drivers build config 00:03:03.213 net/pfe: not in enabled drivers build config 00:03:03.213 net/qede: not in enabled drivers build config 00:03:03.213 net/ring: not in enabled drivers build config 00:03:03.213 net/sfc: not in enabled drivers build config 00:03:03.213 net/softnic: not in enabled drivers build config 00:03:03.213 net/tap: 
not in enabled drivers build config 00:03:03.213 net/thunderx: not in enabled drivers build config 00:03:03.213 net/txgbe: not in enabled drivers build config 00:03:03.213 net/vdev_netvsc: not in enabled drivers build config 00:03:03.213 net/vhost: not in enabled drivers build config 00:03:03.213 net/virtio: not in enabled drivers build config 00:03:03.213 net/vmxnet3: not in enabled drivers build config 00:03:03.213 raw/*: missing internal dependency, "rawdev" 00:03:03.213 crypto/armv8: not in enabled drivers build config 00:03:03.213 crypto/bcmfs: not in enabled drivers build config 00:03:03.213 crypto/caam_jr: not in enabled drivers build config 00:03:03.213 crypto/ccp: not in enabled drivers build config 00:03:03.213 crypto/cnxk: not in enabled drivers build config 00:03:03.213 crypto/dpaa_sec: not in enabled drivers build config 00:03:03.213 crypto/dpaa2_sec: not in enabled drivers build config 00:03:03.213 crypto/ipsec_mb: not in enabled drivers build config 00:03:03.213 crypto/mlx5: not in enabled drivers build config 00:03:03.213 crypto/mvsam: not in enabled drivers build config 00:03:03.213 crypto/nitrox: not in enabled drivers build config 00:03:03.213 crypto/null: not in enabled drivers build config 00:03:03.213 crypto/octeontx: not in enabled drivers build config 00:03:03.213 crypto/openssl: not in enabled drivers build config 00:03:03.213 crypto/scheduler: not in enabled drivers build config 00:03:03.213 crypto/uadk: not in enabled drivers build config 00:03:03.213 crypto/virtio: not in enabled drivers build config 00:03:03.213 compress/isal: not in enabled drivers build config 00:03:03.213 compress/mlx5: not in enabled drivers build config 00:03:03.213 compress/nitrox: not in enabled drivers build config 00:03:03.213 compress/octeontx: not in enabled drivers build config 00:03:03.213 compress/zlib: not in enabled drivers build config 00:03:03.213 regex/*: missing internal dependency, "regexdev" 00:03:03.213 ml/*: missing internal dependency, "mldev" 
00:03:03.213 vdpa/ifc: not in enabled drivers build config 00:03:03.213 vdpa/mlx5: not in enabled drivers build config 00:03:03.213 vdpa/nfp: not in enabled drivers build config 00:03:03.213 vdpa/sfc: not in enabled drivers build config 00:03:03.213 event/*: missing internal dependency, "eventdev" 00:03:03.213 baseband/*: missing internal dependency, "bbdev" 00:03:03.213 gpu/*: missing internal dependency, "gpudev" 00:03:03.213 00:03:03.213 00:03:03.213 Build targets in project: 85 00:03:03.213 00:03:03.213 DPDK 24.03.0 00:03:03.213 00:03:03.213 User defined options 00:03:03.213 buildtype : debug 00:03:03.213 default_library : shared 00:03:03.213 libdir : lib 00:03:03.213 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:03.213 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:03.213 c_link_args : 00:03:03.213 cpu_instruction_set: native 00:03:03.213 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:03.213 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:03.213 enable_docs : false 00:03:03.213 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:03.213 enable_kmods : false 00:03:03.213 max_lcores : 128 00:03:03.213 tests : false 00:03:03.213 00:03:03.213 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.479 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:03.744 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:03.744 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:03.744 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:03.744 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.744 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:03.744 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:03.744 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:03.744 [8/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:03.744 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:03.744 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:03.744 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:03.744 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:03.744 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:03.744 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:03.744 [15/268] Linking static target lib/librte_log.a 00:03:03.744 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:03.744 [17/268] Linking static target lib/librte_kvargs.a 00:03:03.744 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.744 [19/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.025 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:04.025 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:04.025 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:04.025 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:04.025 [24/268] Linking static target lib/librte_pci.a 00:03:04.025 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:04.285 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:04.285 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.285 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:04.285 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:04.285 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.285 [31/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.285 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.285 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.285 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:04.285 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:04.285 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.285 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.285 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.285 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:04.285 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.285 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.285 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:04.285 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:04.285 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:04.285 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.285 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:04.285 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.285 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:04.285 [49/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:04.285 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:04.285 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:04.285 [52/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:04.285 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:04.285 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:04.285 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.285 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.285 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:04.285 [58/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:04.285 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:04.285 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:04.285 [61/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:04.285 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.285 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:04.285 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:04.285 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:04.285 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.285 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.285 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:04.285 [69/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:04.285 [70/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:04.285 [71/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.285 [72/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:04.285 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:04.285 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:04.285 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.285 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:04.285 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:04.285 [78/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:04.285 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:04.285 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:04.285 [81/268] Linking static target lib/librte_telemetry.a 00:03:04.285 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:04.285 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:04.285 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:04.285 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.285 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:04.285 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:04.285 [88/268] Linking static target lib/librte_meter.a 00:03:04.285 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:04.285 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.285 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:04.285 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:04.285 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.285 [94/268] Linking static target lib/librte_ring.a 00:03:04.542 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:04.542 [96/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.542 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:04.542 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:04.542 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:04.542 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:04.542 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.542 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:04.542 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:04.542 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:04.542 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.542 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.542 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:04.542 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.542 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:04.542 [110/268] Linking static target lib/librte_mempool.a 00:03:04.542 [111/268] Linking static target lib/librte_rcu.a 00:03:04.542 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:04.542 [113/268] Linking static target lib/librte_net.a 00:03:04.542 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:04.542 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:04.542 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:04.542 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:04.542 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.542 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:04.542 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:04.542 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:04.542 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:04.542 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.542 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:04.542 [125/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.542 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:04.542 [127/268] Linking static target lib/librte_eal.a 00:03:04.542 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.542 [129/268] Linking static target lib/librte_cmdline.a 00:03:04.542 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.542 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:04.542 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:04.542 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.542 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:04.542 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.800 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:04.800 [137/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.800 
[138/268] Linking target lib/librte_log.so.24.1 00:03:04.800 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.800 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:04.800 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:04.800 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.800 [143/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.800 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.800 [145/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.800 [146/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.800 [147/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.800 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.800 [149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:04.800 [150/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:04.800 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.800 [152/268] Linking static target lib/librte_mbuf.a 00:03:04.800 [153/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.800 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.800 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:04.800 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.800 [157/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.800 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.800 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 
00:03:04.800 [160/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:04.800 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.800 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.800 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.800 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.800 [165/268] Linking static target lib/librte_timer.a 00:03:04.800 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.800 [167/268] Linking target lib/librte_telemetry.so.24.1 00:03:04.800 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:04.800 [169/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:04.800 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:04.800 [171/268] Linking target lib/librte_kvargs.so.24.1 00:03:04.800 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.800 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.800 [174/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.800 [175/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.800 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.057 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.057 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:05.057 [179/268] Linking static target lib/librte_reorder.a 00:03:05.057 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:05.057 [181/268] Linking static target lib/librte_compressdev.a 00:03:05.057 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 
00:03:05.057 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:05.057 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:05.057 [185/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.057 [186/268] Linking static target lib/librte_dmadev.a 00:03:05.057 [187/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.057 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:05.057 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.057 [190/268] Linking static target lib/librte_power.a 00:03:05.057 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.057 [192/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.057 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:05.057 [194/268] Linking static target lib/librte_hash.a 00:03:05.057 [195/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.057 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.057 [197/268] Linking static target lib/librte_security.a 00:03:05.057 [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:05.057 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:05.058 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.058 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.058 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.058 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.058 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.058 [205/268] Linking static target 
drivers/librte_mempool_ring.a 00:03:05.058 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.058 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.058 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.058 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.058 [210/268] Linking static target drivers/librte_bus_pci.a 00:03:05.315 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.315 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.315 [213/268] Linking static target lib/librte_cryptodev.a 00:03:05.315 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.315 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.315 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.315 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:05.315 [218/268] Linking static target lib/librte_ethdev.a 00:03:05.572 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.572 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.572 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.572 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.572 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.830 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.830 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:05.830 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.830 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.763 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.763 [229/268] Linking static target lib/librte_vhost.a 00:03:07.021 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.918 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.179 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.437 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.437 [234/268] Linking target lib/librte_eal.so.24.1 00:03:14.696 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:14.696 [236/268] Linking target lib/librte_ring.so.24.1 00:03:14.696 [237/268] Linking target lib/librte_meter.so.24.1 00:03:14.696 [238/268] Linking target lib/librte_timer.so.24.1 00:03:14.696 [239/268] Linking target lib/librte_pci.so.24.1 00:03:14.696 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:14.696 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:14.955 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:14.955 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:14.955 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:14.955 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:14.955 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:14.955 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:14.955 [248/268] Linking target 
lib/librte_mempool.so.24.1 00:03:14.955 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:14.955 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:14.955 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:14.955 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:14.955 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:15.213 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:15.213 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:15.213 [256/268] Linking target lib/librte_net.so.24.1 00:03:15.213 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:15.213 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:15.471 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:15.471 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:15.471 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:15.471 [262/268] Linking target lib/librte_hash.so.24.1 00:03:15.471 [263/268] Linking target lib/librte_security.so.24.1 00:03:15.471 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:15.471 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:15.471 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:15.728 [267/268] Linking target lib/librte_power.so.24.1 00:03:15.728 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:15.728 INFO: autodetecting backend as ninja 00:03:15.728 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:27.925 CC lib/log/log.o 00:03:27.925 CC lib/log/log_flags.o 00:03:27.925 CC lib/log/log_deprecated.o 00:03:27.925 CC lib/ut_mock/mock.o 00:03:27.925 CC lib/ut/ut.o 00:03:27.925 LIB libspdk_ut.a 
00:03:27.925 LIB libspdk_log.a 00:03:27.925 LIB libspdk_ut_mock.a 00:03:27.925 SO libspdk_ut.so.2.0 00:03:27.925 SO libspdk_ut_mock.so.6.0 00:03:27.925 SO libspdk_log.so.7.1 00:03:27.925 SYMLINK libspdk_ut.so 00:03:27.925 SYMLINK libspdk_ut_mock.so 00:03:27.925 SYMLINK libspdk_log.so 00:03:27.925 CC lib/ioat/ioat.o 00:03:27.925 CC lib/dma/dma.o 00:03:27.925 CC lib/util/base64.o 00:03:27.925 CC lib/util/bit_array.o 00:03:27.925 CC lib/util/cpuset.o 00:03:27.925 CC lib/util/crc16.o 00:03:27.925 CC lib/util/crc32.o 00:03:27.925 CXX lib/trace_parser/trace.o 00:03:27.925 CC lib/util/crc32c.o 00:03:27.925 CC lib/util/crc64.o 00:03:27.925 CC lib/util/crc32_ieee.o 00:03:27.925 CC lib/util/dif.o 00:03:27.926 CC lib/util/fd.o 00:03:27.926 CC lib/util/fd_group.o 00:03:27.926 CC lib/util/file.o 00:03:27.926 CC lib/util/hexlify.o 00:03:27.926 CC lib/util/iov.o 00:03:27.926 CC lib/util/math.o 00:03:27.926 CC lib/util/net.o 00:03:27.926 CC lib/util/pipe.o 00:03:27.926 CC lib/util/strerror_tls.o 00:03:27.926 CC lib/util/string.o 00:03:27.926 CC lib/util/uuid.o 00:03:27.926 CC lib/util/xor.o 00:03:27.926 CC lib/util/zipf.o 00:03:27.926 CC lib/util/md5.o 00:03:27.926 CC lib/vfio_user/host/vfio_user_pci.o 00:03:27.926 CC lib/vfio_user/host/vfio_user.o 00:03:27.926 LIB libspdk_dma.a 00:03:27.926 SO libspdk_dma.so.5.0 00:03:27.926 LIB libspdk_ioat.a 00:03:27.926 SYMLINK libspdk_dma.so 00:03:27.926 SO libspdk_ioat.so.7.0 00:03:27.926 SYMLINK libspdk_ioat.so 00:03:27.926 LIB libspdk_vfio_user.a 00:03:27.926 SO libspdk_vfio_user.so.5.0 00:03:27.926 SYMLINK libspdk_vfio_user.so 00:03:27.926 LIB libspdk_util.a 00:03:27.926 SO libspdk_util.so.10.1 00:03:27.926 SYMLINK libspdk_util.so 00:03:27.926 LIB libspdk_trace_parser.a 00:03:27.926 SO libspdk_trace_parser.so.6.0 00:03:27.926 SYMLINK libspdk_trace_parser.so 00:03:27.926 CC lib/env_dpdk/env.o 00:03:27.926 CC lib/env_dpdk/memory.o 00:03:27.926 CC lib/env_dpdk/pci.o 00:03:27.926 CC lib/env_dpdk/init.o 00:03:27.926 CC lib/env_dpdk/threads.o 
00:03:27.926 CC lib/env_dpdk/pci_ioat.o 00:03:27.926 CC lib/env_dpdk/pci_virtio.o 00:03:27.926 CC lib/env_dpdk/pci_vmd.o 00:03:27.926 CC lib/env_dpdk/pci_idxd.o 00:03:27.926 CC lib/env_dpdk/pci_event.o 00:03:27.926 CC lib/env_dpdk/sigbus_handler.o 00:03:27.926 CC lib/env_dpdk/pci_dpdk.o 00:03:27.926 CC lib/json/json_parse.o 00:03:27.926 CC lib/vmd/vmd.o 00:03:27.926 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.926 CC lib/json/json_util.o 00:03:27.926 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.926 CC lib/rdma_utils/rdma_utils.o 00:03:27.926 CC lib/json/json_write.o 00:03:27.926 CC lib/vmd/led.o 00:03:27.926 CC lib/conf/conf.o 00:03:27.926 CC lib/idxd/idxd.o 00:03:27.926 CC lib/idxd/idxd_user.o 00:03:27.926 CC lib/idxd/idxd_kernel.o 00:03:27.926 LIB libspdk_conf.a 00:03:28.183 SO libspdk_conf.so.6.0 00:03:28.183 LIB libspdk_rdma_utils.a 00:03:28.183 LIB libspdk_json.a 00:03:28.183 SO libspdk_rdma_utils.so.1.0 00:03:28.183 SYMLINK libspdk_conf.so 00:03:28.183 SO libspdk_json.so.6.0 00:03:28.183 SYMLINK libspdk_rdma_utils.so 00:03:28.183 SYMLINK libspdk_json.so 00:03:28.183 LIB libspdk_idxd.a 00:03:28.183 LIB libspdk_vmd.a 00:03:28.441 SO libspdk_idxd.so.12.1 00:03:28.441 SO libspdk_vmd.so.6.0 00:03:28.441 SYMLINK libspdk_idxd.so 00:03:28.441 SYMLINK libspdk_vmd.so 00:03:28.441 CC lib/rdma_provider/common.o 00:03:28.441 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:28.441 CC lib/jsonrpc/jsonrpc_server.o 00:03:28.441 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:28.441 CC lib/jsonrpc/jsonrpc_client.o 00:03:28.441 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.698 LIB libspdk_rdma_provider.a 00:03:28.698 SO libspdk_rdma_provider.so.7.0 00:03:28.698 LIB libspdk_jsonrpc.a 00:03:28.699 SYMLINK libspdk_rdma_provider.so 00:03:28.699 SO libspdk_jsonrpc.so.6.0 00:03:28.699 SYMLINK libspdk_jsonrpc.so 00:03:28.699 LIB libspdk_env_dpdk.a 00:03:28.957 SO libspdk_env_dpdk.so.15.1 00:03:28.957 SYMLINK libspdk_env_dpdk.so 00:03:29.216 CC lib/rpc/rpc.o 00:03:29.216 LIB libspdk_rpc.a 
00:03:29.216 SO libspdk_rpc.so.6.0 00:03:29.474 SYMLINK libspdk_rpc.so 00:03:29.731 CC lib/trace/trace.o 00:03:29.731 CC lib/trace/trace_flags.o 00:03:29.731 CC lib/trace/trace_rpc.o 00:03:29.731 CC lib/notify/notify.o 00:03:29.731 CC lib/keyring/keyring.o 00:03:29.731 CC lib/notify/notify_rpc.o 00:03:29.731 CC lib/keyring/keyring_rpc.o 00:03:29.989 LIB libspdk_notify.a 00:03:29.989 LIB libspdk_keyring.a 00:03:29.989 SO libspdk_notify.so.6.0 00:03:29.989 SO libspdk_keyring.so.2.0 00:03:29.989 LIB libspdk_trace.a 00:03:29.989 SYMLINK libspdk_notify.so 00:03:29.989 SO libspdk_trace.so.11.0 00:03:29.989 SYMLINK libspdk_keyring.so 00:03:29.989 SYMLINK libspdk_trace.so 00:03:30.248 CC lib/sock/sock.o 00:03:30.248 CC lib/sock/sock_rpc.o 00:03:30.248 CC lib/thread/thread.o 00:03:30.248 CC lib/thread/iobuf.o 00:03:30.815 LIB libspdk_sock.a 00:03:30.815 SO libspdk_sock.so.10.0 00:03:30.815 SYMLINK libspdk_sock.so 00:03:31.083 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:31.083 CC lib/nvme/nvme_ctrlr.o 00:03:31.083 CC lib/nvme/nvme_fabric.o 00:03:31.083 CC lib/nvme/nvme_ns_cmd.o 00:03:31.083 CC lib/nvme/nvme_ns.o 00:03:31.083 CC lib/nvme/nvme_pcie_common.o 00:03:31.083 CC lib/nvme/nvme_pcie.o 00:03:31.083 CC lib/nvme/nvme_qpair.o 00:03:31.083 CC lib/nvme/nvme.o 00:03:31.083 CC lib/nvme/nvme_quirks.o 00:03:31.083 CC lib/nvme/nvme_transport.o 00:03:31.083 CC lib/nvme/nvme_discovery.o 00:03:31.083 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:31.083 CC lib/nvme/nvme_tcp.o 00:03:31.083 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:31.083 CC lib/nvme/nvme_opal.o 00:03:31.083 CC lib/nvme/nvme_io_msg.o 00:03:31.083 CC lib/nvme/nvme_zns.o 00:03:31.083 CC lib/nvme/nvme_poll_group.o 00:03:31.083 CC lib/nvme/nvme_stubs.o 00:03:31.083 CC lib/nvme/nvme_auth.o 00:03:31.083 CC lib/nvme/nvme_cuse.o 00:03:31.083 CC lib/nvme/nvme_vfio_user.o 00:03:31.083 CC lib/nvme/nvme_rdma.o 00:03:31.341 LIB libspdk_thread.a 00:03:31.341 SO libspdk_thread.so.11.0 00:03:31.600 SYMLINK libspdk_thread.so 00:03:31.858 CC 
lib/accel/accel.o 00:03:31.858 CC lib/accel/accel_rpc.o 00:03:31.858 CC lib/accel/accel_sw.o 00:03:31.858 CC lib/blob/blobstore.o 00:03:31.858 CC lib/blob/request.o 00:03:31.858 CC lib/virtio/virtio.o 00:03:31.858 CC lib/blob/zeroes.o 00:03:31.858 CC lib/vfu_tgt/tgt_endpoint.o 00:03:31.858 CC lib/virtio/virtio_vhost_user.o 00:03:31.858 CC lib/blob/blob_bs_dev.o 00:03:31.858 CC lib/virtio/virtio_vfio_user.o 00:03:31.858 CC lib/vfu_tgt/tgt_rpc.o 00:03:31.858 CC lib/init/json_config.o 00:03:31.858 CC lib/virtio/virtio_pci.o 00:03:31.858 CC lib/init/subsystem.o 00:03:31.858 CC lib/init/subsystem_rpc.o 00:03:31.858 CC lib/init/rpc.o 00:03:31.858 CC lib/fsdev/fsdev.o 00:03:31.858 CC lib/fsdev/fsdev_io.o 00:03:31.858 CC lib/fsdev/fsdev_rpc.o 00:03:32.117 LIB libspdk_init.a 00:03:32.117 SO libspdk_init.so.6.0 00:03:32.117 LIB libspdk_vfu_tgt.a 00:03:32.117 LIB libspdk_virtio.a 00:03:32.117 SO libspdk_vfu_tgt.so.3.0 00:03:32.117 SYMLINK libspdk_init.so 00:03:32.117 SO libspdk_virtio.so.7.0 00:03:32.117 SYMLINK libspdk_vfu_tgt.so 00:03:32.117 SYMLINK libspdk_virtio.so 00:03:32.376 LIB libspdk_fsdev.a 00:03:32.376 SO libspdk_fsdev.so.2.0 00:03:32.376 CC lib/event/app.o 00:03:32.376 CC lib/event/reactor.o 00:03:32.376 CC lib/event/log_rpc.o 00:03:32.376 CC lib/event/app_rpc.o 00:03:32.376 CC lib/event/scheduler_static.o 00:03:32.376 SYMLINK libspdk_fsdev.so 00:03:32.635 LIB libspdk_accel.a 00:03:32.635 SO libspdk_accel.so.16.0 00:03:32.635 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:32.635 SYMLINK libspdk_accel.so 00:03:32.895 LIB libspdk_event.a 00:03:32.895 LIB libspdk_nvme.a 00:03:32.895 SO libspdk_event.so.14.0 00:03:32.895 SYMLINK libspdk_event.so 00:03:32.895 SO libspdk_nvme.so.15.0 00:03:33.154 CC lib/bdev/bdev.o 00:03:33.154 CC lib/bdev/bdev_rpc.o 00:03:33.154 CC lib/bdev/bdev_zone.o 00:03:33.154 CC lib/bdev/part.o 00:03:33.154 CC lib/bdev/scsi_nvme.o 00:03:33.154 SYMLINK libspdk_nvme.so 00:03:33.154 LIB libspdk_fuse_dispatcher.a 00:03:33.154 SO 
libspdk_fuse_dispatcher.so.1.0 00:03:33.413 SYMLINK libspdk_fuse_dispatcher.so 00:03:33.981 LIB libspdk_blob.a 00:03:33.981 SO libspdk_blob.so.12.0 00:03:33.981 SYMLINK libspdk_blob.so 00:03:34.549 CC lib/blobfs/blobfs.o 00:03:34.549 CC lib/blobfs/tree.o 00:03:34.549 CC lib/lvol/lvol.o 00:03:34.809 LIB libspdk_bdev.a 00:03:35.068 SO libspdk_bdev.so.17.0 00:03:35.068 LIB libspdk_blobfs.a 00:03:35.068 SO libspdk_blobfs.so.11.0 00:03:35.068 SYMLINK libspdk_bdev.so 00:03:35.068 LIB libspdk_lvol.a 00:03:35.068 SYMLINK libspdk_blobfs.so 00:03:35.068 SO libspdk_lvol.so.11.0 00:03:35.068 SYMLINK libspdk_lvol.so 00:03:35.327 CC lib/ublk/ublk.o 00:03:35.327 CC lib/ublk/ublk_rpc.o 00:03:35.327 CC lib/nvmf/ctrlr.o 00:03:35.327 CC lib/nvmf/ctrlr_discovery.o 00:03:35.327 CC lib/nvmf/ctrlr_bdev.o 00:03:35.327 CC lib/nvmf/subsystem.o 00:03:35.327 CC lib/nvmf/nvmf.o 00:03:35.327 CC lib/nvmf/transport.o 00:03:35.327 CC lib/nvmf/nvmf_rpc.o 00:03:35.327 CC lib/nvmf/stubs.o 00:03:35.327 CC lib/nbd/nbd.o 00:03:35.327 CC lib/nvmf/tcp.o 00:03:35.327 CC lib/nbd/nbd_rpc.o 00:03:35.327 CC lib/nvmf/mdns_server.o 00:03:35.327 CC lib/scsi/dev.o 00:03:35.327 CC lib/nvmf/vfio_user.o 00:03:35.327 CC lib/ftl/ftl_core.o 00:03:35.327 CC lib/scsi/lun.o 00:03:35.327 CC lib/nvmf/rdma.o 00:03:35.327 CC lib/ftl/ftl_init.o 00:03:35.327 CC lib/scsi/port.o 00:03:35.327 CC lib/nvmf/auth.o 00:03:35.327 CC lib/scsi/scsi.o 00:03:35.327 CC lib/ftl/ftl_layout.o 00:03:35.327 CC lib/ftl/ftl_debug.o 00:03:35.327 CC lib/scsi/scsi_bdev.o 00:03:35.327 CC lib/ftl/ftl_io.o 00:03:35.327 CC lib/scsi/scsi_pr.o 00:03:35.327 CC lib/ftl/ftl_sb.o 00:03:35.327 CC lib/scsi/scsi_rpc.o 00:03:35.327 CC lib/ftl/ftl_l2p_flat.o 00:03:35.327 CC lib/ftl/ftl_l2p.o 00:03:35.327 CC lib/scsi/task.o 00:03:35.327 CC lib/ftl/ftl_nv_cache.o 00:03:35.327 CC lib/ftl/ftl_band.o 00:03:35.327 CC lib/ftl/ftl_band_ops.o 00:03:35.327 CC lib/ftl/ftl_writer.o 00:03:35.327 CC lib/ftl/ftl_rq.o 00:03:35.327 CC lib/ftl/ftl_reloc.o 00:03:35.327 CC 
lib/ftl/ftl_l2p_cache.o 00:03:35.327 CC lib/ftl/ftl_p2l.o 00:03:35.327 CC lib/ftl/ftl_p2l_log.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:35.327 CC lib/ftl/utils/ftl_conf.o 00:03:35.327 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:35.327 CC lib/ftl/utils/ftl_mempool.o 00:03:35.327 CC lib/ftl/utils/ftl_md.o 00:03:35.327 CC lib/ftl/utils/ftl_bitmap.o 00:03:35.327 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:35.327 CC lib/ftl/utils/ftl_property.o 00:03:35.327 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:35.327 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:35.327 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:35.327 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:35.327 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:35.327 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:35.327 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:35.327 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:35.327 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:35.327 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:35.327 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:35.327 CC lib/ftl/base/ftl_base_bdev.o 00:03:35.327 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:35.327 CC lib/ftl/base/ftl_base_dev.o 00:03:35.327 CC lib/ftl/ftl_trace.o 00:03:35.895 LIB libspdk_nbd.a 00:03:35.895 SO libspdk_nbd.so.7.0 00:03:35.895 LIB libspdk_ublk.a 00:03:36.153 SO libspdk_ublk.so.3.0 00:03:36.153 SYMLINK libspdk_nbd.so 00:03:36.153 LIB libspdk_scsi.a 00:03:36.153 SYMLINK libspdk_ublk.so 00:03:36.153 SO libspdk_scsi.so.9.0 00:03:36.153 SYMLINK libspdk_scsi.so 00:03:36.410 LIB 
libspdk_ftl.a 00:03:36.410 CC lib/iscsi/conn.o 00:03:36.410 CC lib/iscsi/init_grp.o 00:03:36.410 CC lib/iscsi/iscsi.o 00:03:36.410 CC lib/iscsi/param.o 00:03:36.410 CC lib/iscsi/portal_grp.o 00:03:36.410 CC lib/iscsi/tgt_node.o 00:03:36.410 CC lib/vhost/vhost.o 00:03:36.410 CC lib/iscsi/iscsi_subsystem.o 00:03:36.410 CC lib/iscsi/iscsi_rpc.o 00:03:36.410 CC lib/vhost/vhost_rpc.o 00:03:36.410 CC lib/iscsi/task.o 00:03:36.410 CC lib/vhost/vhost_scsi.o 00:03:36.410 CC lib/vhost/vhost_blk.o 00:03:36.410 CC lib/vhost/rte_vhost_user.o 00:03:36.667 SO libspdk_ftl.so.9.0 00:03:36.667 SYMLINK libspdk_ftl.so 00:03:37.232 LIB libspdk_nvmf.a 00:03:37.232 SO libspdk_nvmf.so.20.0 00:03:37.232 LIB libspdk_vhost.a 00:03:37.232 SO libspdk_vhost.so.8.0 00:03:37.490 SYMLINK libspdk_nvmf.so 00:03:37.490 SYMLINK libspdk_vhost.so 00:03:37.490 LIB libspdk_iscsi.a 00:03:37.490 SO libspdk_iscsi.so.8.0 00:03:37.748 SYMLINK libspdk_iscsi.so 00:03:38.326 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.326 CC module/vfu_device/vfu_virtio_blk.o 00:03:38.326 CC module/vfu_device/vfu_virtio.o 00:03:38.326 CC module/vfu_device/vfu_virtio_scsi.o 00:03:38.326 CC module/vfu_device/vfu_virtio_fs.o 00:03:38.326 CC module/vfu_device/vfu_virtio_rpc.o 00:03:38.326 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.326 CC module/accel/ioat/accel_ioat.o 00:03:38.326 CC module/sock/posix/posix.o 00:03:38.326 CC module/accel/error/accel_error_rpc.o 00:03:38.326 CC module/accel/error/accel_error.o 00:03:38.326 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.326 CC module/accel/iaa/accel_iaa.o 00:03:38.326 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.326 CC module/accel/dsa/accel_dsa.o 00:03:38.326 LIB libspdk_env_dpdk_rpc.a 00:03:38.326 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.326 CC module/keyring/linux/keyring.o 00:03:38.326 CC module/keyring/linux/keyring_rpc.o 00:03:38.326 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.326 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.326 CC 
module/blob/bdev/blob_bdev.o 00:03:38.326 CC module/keyring/file/keyring.o 00:03:38.326 CC module/keyring/file/keyring_rpc.o 00:03:38.326 CC module/fsdev/aio/fsdev_aio.o 00:03:38.326 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:38.326 CC module/fsdev/aio/linux_aio_mgr.o 00:03:38.326 SO libspdk_env_dpdk_rpc.so.6.0 00:03:38.326 SYMLINK libspdk_env_dpdk_rpc.so 00:03:38.584 LIB libspdk_scheduler_gscheduler.a 00:03:38.584 LIB libspdk_scheduler_dpdk_governor.a 00:03:38.584 LIB libspdk_keyring_file.a 00:03:38.584 LIB libspdk_accel_ioat.a 00:03:38.584 LIB libspdk_keyring_linux.a 00:03:38.584 SO libspdk_keyring_file.so.2.0 00:03:38.584 LIB libspdk_accel_error.a 00:03:38.584 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.584 SO libspdk_accel_ioat.so.6.0 00:03:38.584 LIB libspdk_scheduler_dynamic.a 00:03:38.584 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:38.584 LIB libspdk_accel_iaa.a 00:03:38.584 SO libspdk_keyring_linux.so.1.0 00:03:38.584 SO libspdk_accel_error.so.2.0 00:03:38.584 SO libspdk_scheduler_dynamic.so.4.0 00:03:38.584 SO libspdk_accel_iaa.so.3.0 00:03:38.584 SYMLINK libspdk_scheduler_gscheduler.so 00:03:38.584 SYMLINK libspdk_keyring_file.so 00:03:38.584 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:38.584 LIB libspdk_blob_bdev.a 00:03:38.584 SYMLINK libspdk_accel_ioat.so 00:03:38.584 SYMLINK libspdk_keyring_linux.so 00:03:38.584 LIB libspdk_accel_dsa.a 00:03:38.584 SYMLINK libspdk_scheduler_dynamic.so 00:03:38.584 SYMLINK libspdk_accel_error.so 00:03:38.584 SYMLINK libspdk_accel_iaa.so 00:03:38.584 SO libspdk_blob_bdev.so.12.0 00:03:38.584 SO libspdk_accel_dsa.so.5.0 00:03:38.584 SYMLINK libspdk_blob_bdev.so 00:03:38.584 SYMLINK libspdk_accel_dsa.so 00:03:38.841 LIB libspdk_vfu_device.a 00:03:38.841 SO libspdk_vfu_device.so.3.0 00:03:38.841 SYMLINK libspdk_vfu_device.so 00:03:38.841 LIB libspdk_fsdev_aio.a 00:03:38.841 LIB libspdk_sock_posix.a 00:03:38.841 SO libspdk_fsdev_aio.so.1.0 00:03:38.841 SO libspdk_sock_posix.so.6.0 00:03:39.099 SYMLINK 
libspdk_fsdev_aio.so 00:03:39.099 SYMLINK libspdk_sock_posix.so 00:03:39.099 CC module/bdev/delay/vbdev_delay.o 00:03:39.099 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:39.099 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.099 CC module/bdev/error/vbdev_error.o 00:03:39.099 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.099 CC module/bdev/raid/bdev_raid.o 00:03:39.099 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.099 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.099 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.099 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.099 CC module/bdev/raid/raid0.o 00:03:39.099 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.099 CC module/bdev/raid/raid1.o 00:03:39.099 CC module/bdev/raid/concat.o 00:03:39.099 CC module/bdev/malloc/bdev_malloc.o 00:03:39.099 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.099 CC module/bdev/null/bdev_null.o 00:03:39.099 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.099 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.099 CC module/bdev/nvme/bdev_nvme.o 00:03:39.099 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.099 CC module/bdev/null/bdev_null_rpc.o 00:03:39.099 CC module/bdev/gpt/gpt.o 00:03:39.099 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.099 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.099 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.099 CC module/bdev/nvme/nvme_rpc.o 00:03:39.099 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.099 CC module/bdev/ftl/bdev_ftl.o 00:03:39.099 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.099 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.099 CC module/bdev/aio/bdev_aio.o 00:03:39.099 CC module/bdev/nvme/vbdev_opal.o 00:03:39.099 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.099 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.099 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.099 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.099 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.099 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.099 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:39.099 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.099 CC module/bdev/split/vbdev_split.o 00:03:39.356 LIB libspdk_blobfs_bdev.a 00:03:39.356 SO libspdk_blobfs_bdev.so.6.0 00:03:39.356 LIB libspdk_bdev_error.a 00:03:39.356 LIB libspdk_bdev_null.a 00:03:39.356 LIB libspdk_bdev_split.a 00:03:39.356 SO libspdk_bdev_error.so.6.0 00:03:39.356 SO libspdk_bdev_null.so.6.0 00:03:39.615 LIB libspdk_bdev_gpt.a 00:03:39.615 SO libspdk_bdev_split.so.6.0 00:03:39.615 SYMLINK libspdk_blobfs_bdev.so 00:03:39.615 SO libspdk_bdev_gpt.so.6.0 00:03:39.615 LIB libspdk_bdev_ftl.a 00:03:39.615 LIB libspdk_bdev_aio.a 00:03:39.615 LIB libspdk_bdev_passthru.a 00:03:39.615 LIB libspdk_bdev_zone_block.a 00:03:39.615 SYMLINK libspdk_bdev_error.so 00:03:39.615 LIB libspdk_bdev_iscsi.a 00:03:39.615 SO libspdk_bdev_aio.so.6.0 00:03:39.615 SO libspdk_bdev_ftl.so.6.0 00:03:39.615 SYMLINK libspdk_bdev_null.so 00:03:39.615 SYMLINK libspdk_bdev_split.so 00:03:39.615 SO libspdk_bdev_passthru.so.6.0 00:03:39.615 LIB libspdk_bdev_delay.a 00:03:39.615 SYMLINK libspdk_bdev_gpt.so 00:03:39.615 SO libspdk_bdev_zone_block.so.6.0 00:03:39.615 LIB libspdk_bdev_malloc.a 00:03:39.615 SO libspdk_bdev_iscsi.so.6.0 00:03:39.615 SO libspdk_bdev_delay.so.6.0 00:03:39.615 SYMLINK libspdk_bdev_aio.so 00:03:39.615 SYMLINK libspdk_bdev_ftl.so 00:03:39.615 SO libspdk_bdev_malloc.so.6.0 00:03:39.615 SYMLINK libspdk_bdev_passthru.so 00:03:39.615 SYMLINK libspdk_bdev_zone_block.so 00:03:39.615 LIB libspdk_bdev_lvol.a 00:03:39.615 SYMLINK libspdk_bdev_iscsi.so 00:03:39.615 SYMLINK libspdk_bdev_delay.so 00:03:39.615 LIB libspdk_bdev_virtio.a 00:03:39.615 SYMLINK libspdk_bdev_malloc.so 00:03:39.615 SO libspdk_bdev_lvol.so.6.0 00:03:39.615 SO libspdk_bdev_virtio.so.6.0 00:03:39.873 SYMLINK libspdk_bdev_lvol.so 00:03:39.873 SYMLINK libspdk_bdev_virtio.so 00:03:40.131 LIB libspdk_bdev_raid.a 00:03:40.131 SO libspdk_bdev_raid.so.6.0 00:03:40.131 SYMLINK libspdk_bdev_raid.so 
00:03:41.066 LIB libspdk_bdev_nvme.a 00:03:41.066 SO libspdk_bdev_nvme.so.7.1 00:03:41.325 SYMLINK libspdk_bdev_nvme.so 00:03:41.892 CC module/event/subsystems/vmd/vmd.o 00:03:41.892 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.892 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.892 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:41.892 CC module/event/subsystems/keyring/keyring.o 00:03:41.892 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.892 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.892 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.892 CC module/event/subsystems/sock/sock.o 00:03:41.892 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.892 LIB libspdk_event_keyring.a 00:03:41.892 LIB libspdk_event_fsdev.a 00:03:41.892 LIB libspdk_event_vhost_blk.a 00:03:41.892 LIB libspdk_event_vmd.a 00:03:41.892 LIB libspdk_event_vfu_tgt.a 00:03:41.892 LIB libspdk_event_sock.a 00:03:41.892 LIB libspdk_event_scheduler.a 00:03:41.892 SO libspdk_event_fsdev.so.1.0 00:03:41.892 LIB libspdk_event_iobuf.a 00:03:41.892 SO libspdk_event_keyring.so.1.0 00:03:41.892 SO libspdk_event_vhost_blk.so.3.0 00:03:42.152 SO libspdk_event_vmd.so.6.0 00:03:42.152 SO libspdk_event_scheduler.so.4.0 00:03:42.152 SO libspdk_event_sock.so.5.0 00:03:42.152 SO libspdk_event_vfu_tgt.so.3.0 00:03:42.152 SO libspdk_event_iobuf.so.3.0 00:03:42.152 SYMLINK libspdk_event_fsdev.so 00:03:42.152 SYMLINK libspdk_event_keyring.so 00:03:42.152 SYMLINK libspdk_event_vhost_blk.so 00:03:42.152 SYMLINK libspdk_event_vmd.so 00:03:42.152 SYMLINK libspdk_event_scheduler.so 00:03:42.152 SYMLINK libspdk_event_sock.so 00:03:42.152 SYMLINK libspdk_event_iobuf.so 00:03:42.152 SYMLINK libspdk_event_vfu_tgt.so 00:03:42.411 CC module/event/subsystems/accel/accel.o 00:03:42.670 LIB libspdk_event_accel.a 00:03:42.670 SO libspdk_event_accel.so.6.0 00:03:42.670 SYMLINK libspdk_event_accel.so 00:03:42.928 CC module/event/subsystems/bdev/bdev.o 00:03:43.186 LIB libspdk_event_bdev.a 00:03:43.186 
SO libspdk_event_bdev.so.6.0 00:03:43.186 SYMLINK libspdk_event_bdev.so 00:03:43.445 CC module/event/subsystems/scsi/scsi.o 00:03:43.445 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.445 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.445 CC module/event/subsystems/nbd/nbd.o 00:03:43.445 CC module/event/subsystems/ublk/ublk.o 00:03:43.703 LIB libspdk_event_ublk.a 00:03:43.703 LIB libspdk_event_nbd.a 00:03:43.703 LIB libspdk_event_scsi.a 00:03:43.703 SO libspdk_event_ublk.so.3.0 00:03:43.703 SO libspdk_event_nbd.so.6.0 00:03:43.703 SO libspdk_event_scsi.so.6.0 00:03:43.703 LIB libspdk_event_nvmf.a 00:03:43.703 SYMLINK libspdk_event_ublk.so 00:03:43.703 SYMLINK libspdk_event_nbd.so 00:03:43.703 SYMLINK libspdk_event_scsi.so 00:03:43.703 SO libspdk_event_nvmf.so.6.0 00:03:43.960 SYMLINK libspdk_event_nvmf.so 00:03:44.218 CC module/event/subsystems/iscsi/iscsi.o 00:03:44.218 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:44.218 LIB libspdk_event_iscsi.a 00:03:44.218 LIB libspdk_event_vhost_scsi.a 00:03:44.218 SO libspdk_event_vhost_scsi.so.3.0 00:03:44.218 SO libspdk_event_iscsi.so.6.0 00:03:44.218 SYMLINK libspdk_event_vhost_scsi.so 00:03:44.218 SYMLINK libspdk_event_iscsi.so 00:03:44.477 SO libspdk.so.6.0 00:03:44.477 SYMLINK libspdk.so 00:03:44.737 CC app/spdk_lspci/spdk_lspci.o 00:03:45.011 CC app/trace_record/trace_record.o 00:03:45.011 CXX app/trace/trace.o 00:03:45.011 CC app/spdk_top/spdk_top.o 00:03:45.011 CC app/spdk_nvme_perf/perf.o 00:03:45.011 CC test/rpc_client/rpc_client_test.o 00:03:45.011 CC app/spdk_nvme_identify/identify.o 00:03:45.011 TEST_HEADER include/spdk/accel.h 00:03:45.011 TEST_HEADER include/spdk/accel_module.h 00:03:45.011 TEST_HEADER include/spdk/barrier.h 00:03:45.011 TEST_HEADER include/spdk/assert.h 00:03:45.011 TEST_HEADER include/spdk/bdev_module.h 00:03:45.011 TEST_HEADER include/spdk/base64.h 00:03:45.011 TEST_HEADER include/spdk/bdev.h 00:03:45.011 CC app/spdk_nvme_discover/discovery_aer.o 00:03:45.011 
TEST_HEADER include/spdk/bdev_zone.h 00:03:45.012 TEST_HEADER include/spdk/bit_pool.h 00:03:45.012 TEST_HEADER include/spdk/bit_array.h 00:03:45.012 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.012 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.012 TEST_HEADER include/spdk/blobfs.h 00:03:45.012 TEST_HEADER include/spdk/blob.h 00:03:45.012 TEST_HEADER include/spdk/config.h 00:03:45.012 TEST_HEADER include/spdk/conf.h 00:03:45.012 TEST_HEADER include/spdk/crc16.h 00:03:45.012 TEST_HEADER include/spdk/cpuset.h 00:03:45.012 TEST_HEADER include/spdk/crc32.h 00:03:45.012 TEST_HEADER include/spdk/dif.h 00:03:45.012 TEST_HEADER include/spdk/dma.h 00:03:45.012 TEST_HEADER include/spdk/crc64.h 00:03:45.012 TEST_HEADER include/spdk/endian.h 00:03:45.012 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.012 TEST_HEADER include/spdk/env.h 00:03:45.012 TEST_HEADER include/spdk/event.h 00:03:45.012 TEST_HEADER include/spdk/fd_group.h 00:03:45.012 TEST_HEADER include/spdk/fd.h 00:03:45.012 TEST_HEADER include/spdk/file.h 00:03:45.012 TEST_HEADER include/spdk/fsdev_module.h 00:03:45.012 TEST_HEADER include/spdk/fsdev.h 00:03:45.012 TEST_HEADER include/spdk/ftl.h 00:03:45.012 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.012 TEST_HEADER include/spdk/hexlify.h 00:03:45.012 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:45.012 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:45.012 TEST_HEADER include/spdk/idxd.h 00:03:45.012 TEST_HEADER include/spdk/histogram_data.h 00:03:45.012 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.012 CC app/spdk_dd/spdk_dd.o 00:03:45.012 TEST_HEADER include/spdk/ioat.h 00:03:45.012 CC app/iscsi_tgt/iscsi_tgt.o 00:03:45.012 TEST_HEADER include/spdk/init.h 00:03:45.012 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.012 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.012 TEST_HEADER include/spdk/json.h 00:03:45.012 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.012 TEST_HEADER include/spdk/keyring_module.h 00:03:45.012 CC app/nvmf_tgt/nvmf_main.o 00:03:45.012 
TEST_HEADER include/spdk/likely.h 00:03:45.012 TEST_HEADER include/spdk/keyring.h 00:03:45.012 TEST_HEADER include/spdk/log.h 00:03:45.012 TEST_HEADER include/spdk/md5.h 00:03:45.012 TEST_HEADER include/spdk/memory.h 00:03:45.012 TEST_HEADER include/spdk/lvol.h 00:03:45.012 TEST_HEADER include/spdk/mmio.h 00:03:45.012 TEST_HEADER include/spdk/net.h 00:03:45.012 TEST_HEADER include/spdk/nbd.h 00:03:45.012 TEST_HEADER include/spdk/notify.h 00:03:45.012 TEST_HEADER include/spdk/nvme.h 00:03:45.012 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.012 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.012 TEST_HEADER include/spdk/nvme_spec.h 00:03:45.012 TEST_HEADER include/spdk/nvme_zns.h 00:03:45.012 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.012 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.012 TEST_HEADER include/spdk/nvmf.h 00:03:45.012 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.012 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.012 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.012 TEST_HEADER include/spdk/opal.h 00:03:45.012 TEST_HEADER include/spdk/pci_ids.h 00:03:45.012 TEST_HEADER include/spdk/opal_spec.h 00:03:45.012 TEST_HEADER include/spdk/queue.h 00:03:45.012 TEST_HEADER include/spdk/pipe.h 00:03:45.012 TEST_HEADER include/spdk/rpc.h 00:03:45.012 TEST_HEADER include/spdk/reduce.h 00:03:45.012 TEST_HEADER include/spdk/scheduler.h 00:03:45.012 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.012 TEST_HEADER include/spdk/scsi.h 00:03:45.012 TEST_HEADER include/spdk/sock.h 00:03:45.012 TEST_HEADER include/spdk/stdinc.h 00:03:45.012 TEST_HEADER include/spdk/string.h 00:03:45.012 TEST_HEADER include/spdk/thread.h 00:03:45.012 TEST_HEADER include/spdk/trace.h 00:03:45.012 TEST_HEADER include/spdk/tree.h 00:03:45.012 TEST_HEADER include/spdk/trace_parser.h 00:03:45.012 TEST_HEADER include/spdk/ublk.h 00:03:45.012 TEST_HEADER include/spdk/util.h 00:03:45.012 TEST_HEADER include/spdk/uuid.h 00:03:45.012 TEST_HEADER include/spdk/version.h 00:03:45.012 
TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.012 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.012 TEST_HEADER include/spdk/vhost.h 00:03:45.012 TEST_HEADER include/spdk/vmd.h 00:03:45.012 TEST_HEADER include/spdk/xor.h 00:03:45.012 CXX test/cpp_headers/accel.o 00:03:45.012 TEST_HEADER include/spdk/zipf.h 00:03:45.012 CXX test/cpp_headers/accel_module.o 00:03:45.012 CXX test/cpp_headers/barrier.o 00:03:45.012 CXX test/cpp_headers/assert.o 00:03:45.012 CXX test/cpp_headers/base64.o 00:03:45.012 CXX test/cpp_headers/bdev.o 00:03:45.012 CXX test/cpp_headers/bdev_zone.o 00:03:45.012 CXX test/cpp_headers/bit_array.o 00:03:45.012 CC app/spdk_tgt/spdk_tgt.o 00:03:45.012 CXX test/cpp_headers/bdev_module.o 00:03:45.012 CXX test/cpp_headers/bit_pool.o 00:03:45.012 CXX test/cpp_headers/blob_bdev.o 00:03:45.012 CXX test/cpp_headers/blobfs_bdev.o 00:03:45.012 CXX test/cpp_headers/blobfs.o 00:03:45.012 CXX test/cpp_headers/conf.o 00:03:45.012 CXX test/cpp_headers/blob.o 00:03:45.012 CXX test/cpp_headers/config.o 00:03:45.012 CXX test/cpp_headers/cpuset.o 00:03:45.012 CXX test/cpp_headers/crc16.o 00:03:45.012 CXX test/cpp_headers/crc32.o 00:03:45.012 CXX test/cpp_headers/crc64.o 00:03:45.012 CXX test/cpp_headers/dif.o 00:03:45.012 CXX test/cpp_headers/dma.o 00:03:45.012 CXX test/cpp_headers/env.o 00:03:45.012 CXX test/cpp_headers/endian.o 00:03:45.012 CXX test/cpp_headers/env_dpdk.o 00:03:45.012 CXX test/cpp_headers/event.o 00:03:45.012 CXX test/cpp_headers/fd_group.o 00:03:45.012 CXX test/cpp_headers/fd.o 00:03:45.012 CXX test/cpp_headers/file.o 00:03:45.012 CXX test/cpp_headers/fsdev.o 00:03:45.012 CXX test/cpp_headers/fsdev_module.o 00:03:45.012 CXX test/cpp_headers/ftl.o 00:03:45.012 CXX test/cpp_headers/fuse_dispatcher.o 00:03:45.012 CXX test/cpp_headers/histogram_data.o 00:03:45.012 CXX test/cpp_headers/hexlify.o 00:03:45.012 CXX test/cpp_headers/gpt_spec.o 00:03:45.012 CXX test/cpp_headers/idxd.o 00:03:45.012 CXX test/cpp_headers/init.o 00:03:45.012 CXX 
test/cpp_headers/idxd_spec.o 00:03:45.012 CXX test/cpp_headers/ioat_spec.o 00:03:45.012 CXX test/cpp_headers/iscsi_spec.o 00:03:45.012 CXX test/cpp_headers/ioat.o 00:03:45.012 CXX test/cpp_headers/jsonrpc.o 00:03:45.012 CXX test/cpp_headers/keyring.o 00:03:45.012 CXX test/cpp_headers/json.o 00:03:45.012 CXX test/cpp_headers/keyring_module.o 00:03:45.012 CXX test/cpp_headers/likely.o 00:03:45.012 CXX test/cpp_headers/log.o 00:03:45.012 CXX test/cpp_headers/md5.o 00:03:45.012 CXX test/cpp_headers/lvol.o 00:03:45.012 CXX test/cpp_headers/mmio.o 00:03:45.012 CXX test/cpp_headers/memory.o 00:03:45.012 CXX test/cpp_headers/net.o 00:03:45.012 CXX test/cpp_headers/nbd.o 00:03:45.012 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.012 CXX test/cpp_headers/nvme_intel.o 00:03:45.012 CXX test/cpp_headers/notify.o 00:03:45.012 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:45.012 CXX test/cpp_headers/nvme.o 00:03:45.012 CXX test/cpp_headers/nvme_spec.o 00:03:45.012 CXX test/cpp_headers/nvme_zns.o 00:03:45.012 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.012 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.012 CXX test/cpp_headers/nvmf.o 00:03:45.012 CXX test/cpp_headers/nvmf_spec.o 00:03:45.012 CXX test/cpp_headers/nvmf_transport.o 00:03:45.012 CXX test/cpp_headers/opal.o 00:03:45.012 CC examples/util/zipf/zipf.o 00:03:45.012 CC test/thread/poller_perf/poller_perf.o 00:03:45.012 CC examples/ioat/verify/verify.o 00:03:45.012 CC test/app/jsoncat/jsoncat.o 00:03:45.012 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:45.012 CC test/app/stub/stub.o 00:03:45.012 CC examples/ioat/perf/perf.o 00:03:45.012 CC app/fio/nvme/fio_plugin.o 00:03:45.012 CC test/env/memory/memory_ut.o 00:03:45.012 CC test/env/vtophys/vtophys.o 00:03:45.012 CC test/dma/test_dma/test_dma.o 00:03:45.012 CC test/app/histogram_perf/histogram_perf.o 00:03:45.012 LINK spdk_lspci 00:03:45.012 CC test/env/pci/pci_ut.o 00:03:45.289 CXX test/cpp_headers/opal_spec.o 00:03:45.289 CC test/app/bdev_svc/bdev_svc.o 00:03:45.289 CC 
app/fio/bdev/fio_plugin.o 00:03:45.289 LINK rpc_client_test 00:03:45.289 LINK nvmf_tgt 00:03:45.289 LINK interrupt_tgt 00:03:45.558 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.558 LINK spdk_trace_record 00:03:45.558 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:45.558 LINK jsoncat 00:03:45.558 LINK poller_perf 00:03:45.558 LINK zipf 00:03:45.558 LINK vtophys 00:03:45.558 LINK spdk_nvme_discover 00:03:45.558 LINK iscsi_tgt 00:03:45.558 LINK spdk_tgt 00:03:45.558 CXX test/cpp_headers/pci_ids.o 00:03:45.558 CXX test/cpp_headers/pipe.o 00:03:45.558 LINK stub 00:03:45.558 CXX test/cpp_headers/queue.o 00:03:45.558 CXX test/cpp_headers/reduce.o 00:03:45.558 CXX test/cpp_headers/rpc.o 00:03:45.558 CXX test/cpp_headers/scheduler.o 00:03:45.558 CXX test/cpp_headers/scsi.o 00:03:45.558 CXX test/cpp_headers/scsi_spec.o 00:03:45.558 CXX test/cpp_headers/sock.o 00:03:45.558 CXX test/cpp_headers/stdinc.o 00:03:45.558 CXX test/cpp_headers/string.o 00:03:45.558 CXX test/cpp_headers/thread.o 00:03:45.558 CXX test/cpp_headers/trace.o 00:03:45.558 CXX test/cpp_headers/trace_parser.o 00:03:45.558 CXX test/cpp_headers/tree.o 00:03:45.558 CXX test/cpp_headers/ublk.o 00:03:45.558 CXX test/cpp_headers/util.o 00:03:45.558 CXX test/cpp_headers/uuid.o 00:03:45.558 CXX test/cpp_headers/version.o 00:03:45.558 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.558 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.558 CXX test/cpp_headers/vhost.o 00:03:45.558 CXX test/cpp_headers/vmd.o 00:03:45.558 LINK verify 00:03:45.558 CXX test/cpp_headers/xor.o 00:03:45.558 CXX test/cpp_headers/zipf.o 00:03:45.816 LINK spdk_dd 00:03:45.816 LINK histogram_perf 00:03:45.816 LINK env_dpdk_post_init 00:03:45.816 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:45.816 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.816 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.816 LINK bdev_svc 00:03:45.816 LINK ioat_perf 00:03:45.816 LINK spdk_trace 00:03:46.074 LINK test_dma 00:03:46.074 LINK spdk_bdev 00:03:46.074 CC 
test/event/event_perf/event_perf.o 00:03:46.074 CC test/event/reactor_perf/reactor_perf.o 00:03:46.074 CC test/event/reactor/reactor.o 00:03:46.074 LINK spdk_nvme 00:03:46.074 CC test/event/app_repeat/app_repeat.o 00:03:46.074 CC examples/sock/hello_world/hello_sock.o 00:03:46.074 CC examples/vmd/lsvmd/lsvmd.o 00:03:46.074 CC examples/vmd/led/led.o 00:03:46.074 CC test/event/scheduler/scheduler.o 00:03:46.074 CC examples/idxd/perf/perf.o 00:03:46.074 LINK pci_ut 00:03:46.074 CC examples/thread/thread/thread_ex.o 00:03:46.074 LINK nvme_fuzz 00:03:46.074 LINK reactor 00:03:46.074 LINK event_perf 00:03:46.332 LINK reactor_perf 00:03:46.332 LINK mem_callbacks 00:03:46.332 LINK lsvmd 00:03:46.332 LINK led 00:03:46.332 LINK vhost_fuzz 00:03:46.332 LINK spdk_top 00:03:46.332 LINK app_repeat 00:03:46.332 LINK spdk_nvme_perf 00:03:46.332 CC app/vhost/vhost.o 00:03:46.332 LINK spdk_nvme_identify 00:03:46.332 LINK hello_sock 00:03:46.332 LINK scheduler 00:03:46.332 LINK idxd_perf 00:03:46.332 LINK thread 00:03:46.332 CC test/nvme/cuse/cuse.o 00:03:46.332 CC test/nvme/overhead/overhead.o 00:03:46.332 CC test/nvme/reset/reset.o 00:03:46.332 CC test/nvme/err_injection/err_injection.o 00:03:46.590 CC test/nvme/boot_partition/boot_partition.o 00:03:46.590 CC test/nvme/simple_copy/simple_copy.o 00:03:46.590 CC test/nvme/aer/aer.o 00:03:46.590 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:46.590 CC test/nvme/connect_stress/connect_stress.o 00:03:46.590 CC test/nvme/sgl/sgl.o 00:03:46.590 CC test/nvme/compliance/nvme_compliance.o 00:03:46.590 CC test/nvme/reserve/reserve.o 00:03:46.590 CC test/nvme/fused_ordering/fused_ordering.o 00:03:46.590 CC test/nvme/startup/startup.o 00:03:46.590 CC test/nvme/e2edp/nvme_dp.o 00:03:46.590 CC test/nvme/fdp/fdp.o 00:03:46.590 CC test/blobfs/mkfs/mkfs.o 00:03:46.591 CC test/accel/dif/dif.o 00:03:46.591 LINK vhost 00:03:46.591 CC test/lvol/esnap/esnap.o 00:03:46.591 LINK memory_ut 00:03:46.591 LINK err_injection 00:03:46.591 LINK startup 
00:03:46.591 LINK boot_partition 00:03:46.591 LINK connect_stress 00:03:46.591 LINK doorbell_aers 00:03:46.591 LINK reserve 00:03:46.591 LINK fused_ordering 00:03:46.591 LINK mkfs 00:03:46.591 LINK simple_copy 00:03:46.849 LINK sgl 00:03:46.849 LINK aer 00:03:46.849 LINK reset 00:03:46.849 LINK overhead 00:03:46.849 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.849 CC examples/nvme/reconnect/reconnect.o 00:03:46.849 CC examples/nvme/arbitration/arbitration.o 00:03:46.849 CC examples/nvme/hello_world/hello_world.o 00:03:46.849 CC examples/nvme/abort/abort.o 00:03:46.849 LINK nvme_dp 00:03:46.849 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.849 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:46.849 CC examples/nvme/hotplug/hotplug.o 00:03:46.849 LINK nvme_compliance 00:03:46.849 LINK fdp 00:03:46.849 CC examples/accel/perf/accel_perf.o 00:03:46.849 CC examples/blob/hello_world/hello_blob.o 00:03:46.849 CC examples/blob/cli/blobcli.o 00:03:46.849 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:46.849 LINK cmb_copy 00:03:46.849 LINK pmr_persistence 00:03:47.125 LINK hotplug 00:03:47.125 LINK hello_world 00:03:47.125 LINK arbitration 00:03:47.125 LINK abort 00:03:47.125 LINK reconnect 00:03:47.125 LINK dif 00:03:47.125 LINK hello_blob 00:03:47.125 LINK nvme_manage 00:03:47.125 LINK hello_fsdev 00:03:47.382 LINK iscsi_fuzz 00:03:47.382 LINK accel_perf 00:03:47.382 LINK blobcli 00:03:47.639 LINK cuse 00:03:47.639 CC test/bdev/bdevio/bdevio.o 00:03:47.639 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.639 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.898 LINK bdevio 00:03:47.898 LINK hello_bdev 00:03:48.464 LINK bdevperf 00:03:48.814 CC examples/nvmf/nvmf/nvmf.o 00:03:49.173 LINK nvmf 00:03:50.111 LINK esnap 00:03:50.371 00:03:50.371 real 0m55.692s 00:03:50.371 user 8m17.036s 00:03:50.371 sys 3m40.511s 00:03:50.371 20:56:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:50.371 20:56:58 make -- common/autotest_common.sh@10 -- $ set +x 
00:03:50.371 ************************************ 00:03:50.371 END TEST make 00:03:50.371 ************************************ 00:03:50.371 20:56:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:50.371 20:56:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:50.371 20:56:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:50.371 20:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.371 20:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:50.371 20:56:58 -- pm/common@44 -- $ pid=1033706 00:03:50.371 20:56:58 -- pm/common@50 -- $ kill -TERM 1033706 00:03:50.371 20:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.371 20:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:50.371 20:56:58 -- pm/common@44 -- $ pid=1033707 00:03:50.371 20:56:58 -- pm/common@50 -- $ kill -TERM 1033707 00:03:50.371 20:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.371 20:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:50.371 20:56:58 -- pm/common@44 -- $ pid=1033709 00:03:50.371 20:56:58 -- pm/common@50 -- $ kill -TERM 1033709 00:03:50.371 20:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.371 20:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:50.371 20:56:58 -- pm/common@44 -- $ pid=1033732 00:03:50.371 20:56:58 -- pm/common@50 -- $ sudo -E kill -TERM 1033732 00:03:50.371 20:56:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:50.371 20:56:58 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:50.631 20:56:58 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.631 20:56:58 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.631 20:56:58 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.631 20:56:58 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.631 20:56:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.631 20:56:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.631 20:56:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.631 20:56:58 -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.631 20:56:58 -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.631 20:56:58 -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.631 20:56:58 -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.631 20:56:58 -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.631 20:56:58 -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.631 20:56:58 -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.631 20:56:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.631 20:56:58 -- scripts/common.sh@344 -- # case "$op" in 00:03:50.631 20:56:58 -- scripts/common.sh@345 -- # : 1 00:03:50.631 20:56:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.631 20:56:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.631 20:56:58 -- scripts/common.sh@365 -- # decimal 1 00:03:50.631 20:56:58 -- scripts/common.sh@353 -- # local d=1 00:03:50.631 20:56:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.631 20:56:58 -- scripts/common.sh@355 -- # echo 1 00:03:50.631 20:56:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.631 20:56:58 -- scripts/common.sh@366 -- # decimal 2 00:03:50.631 20:56:58 -- scripts/common.sh@353 -- # local d=2 00:03:50.631 20:56:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.631 20:56:58 -- scripts/common.sh@355 -- # echo 2 00:03:50.631 20:56:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.631 20:56:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.631 20:56:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.631 20:56:58 -- scripts/common.sh@368 -- # return 0 00:03:50.631 20:56:58 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.631 20:56:58 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.631 --rc genhtml_branch_coverage=1 00:03:50.631 --rc genhtml_function_coverage=1 00:03:50.631 --rc genhtml_legend=1 00:03:50.631 --rc geninfo_all_blocks=1 00:03:50.631 --rc geninfo_unexecuted_blocks=1 00:03:50.631 00:03:50.631 ' 00:03:50.631 20:56:58 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.631 --rc genhtml_branch_coverage=1 00:03:50.631 --rc genhtml_function_coverage=1 00:03:50.631 --rc genhtml_legend=1 00:03:50.631 --rc geninfo_all_blocks=1 00:03:50.631 --rc geninfo_unexecuted_blocks=1 00:03:50.631 00:03:50.631 ' 00:03:50.631 20:56:58 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.631 --rc genhtml_branch_coverage=1 00:03:50.631 --rc 
genhtml_function_coverage=1 00:03:50.631 --rc genhtml_legend=1 00:03:50.631 --rc geninfo_all_blocks=1 00:03:50.631 --rc geninfo_unexecuted_blocks=1 00:03:50.631 00:03:50.631 ' 00:03:50.631 20:56:58 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.631 --rc genhtml_branch_coverage=1 00:03:50.631 --rc genhtml_function_coverage=1 00:03:50.631 --rc genhtml_legend=1 00:03:50.631 --rc geninfo_all_blocks=1 00:03:50.631 --rc geninfo_unexecuted_blocks=1 00:03:50.631 00:03:50.631 ' 00:03:50.631 20:56:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.631 20:56:58 -- nvmf/common.sh@7 -- # uname -s 00:03:50.631 20:56:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.631 20:56:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.631 20:56:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.631 20:56:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.631 20:56:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.631 20:56:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.631 20:56:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.631 20:56:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.631 20:56:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.631 20:56:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.631 20:56:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:50.631 20:56:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:50.631 20:56:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.631 20:56:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.631 20:56:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:50.631 20:56:58 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:50.631 20:56:58 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.631 20:56:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.631 20:56:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.631 20:56:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.631 20:56:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.631 20:56:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.631 20:56:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.631 20:56:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.631 20:56:58 -- paths/export.sh@5 -- # export PATH 00:03:50.631 20:56:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.631 20:56:58 -- nvmf/common.sh@51 -- # : 0 00:03:50.631 20:56:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.631 20:56:58 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:50.631 20:56:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:50.631 20:56:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.631 20:56:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.631 20:56:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.631 20:56:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.631 20:56:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.631 20:56:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.631 20:56:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.631 20:56:58 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.631 20:56:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:50.632 20:56:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:50.632 20:56:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.632 20:56:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:50.632 20:56:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.632 20:56:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.632 20:56:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.632 20:56:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:50.632 20:56:58 -- spdk/autotest.sh@48 -- # udevadm_pid=1096200 00:03:50.632 20:56:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:50.632 20:56:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:50.632 20:56:58 -- pm/common@17 -- # local monitor 00:03:50.632 20:56:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.632 20:56:58 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:50.632 20:56:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.632 20:56:58 -- pm/common@21 -- # date +%s 00:03:50.632 20:56:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.632 20:56:58 -- pm/common@21 -- # date +%s 00:03:50.632 20:56:58 -- pm/common@25 -- # sleep 1 00:03:50.632 20:56:58 -- pm/common@21 -- # date +%s 00:03:50.632 20:56:58 -- pm/common@21 -- # date +%s 00:03:50.632 20:56:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428618 00:03:50.632 20:56:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428618 00:03:50.632 20:56:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428618 00:03:50.632 20:56:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428618 00:03:50.632 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428618_collect-cpu-load.pm.log 00:03:50.632 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428618_collect-vmstat.pm.log 00:03:50.891 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428618_collect-cpu-temp.pm.log 00:03:50.891 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428618_collect-bmc-pm.bmc.pm.log 00:03:51.830 
20:56:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:51.830 20:56:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:51.830 20:56:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.830 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:03:51.830 20:56:59 -- spdk/autotest.sh@59 -- # create_test_list 00:03:51.830 20:56:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:51.830 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:03:51.830 20:56:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:51.830 20:56:59 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.830 20:56:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.830 20:56:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:51.830 20:56:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.830 20:56:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:51.830 20:56:59 -- common/autotest_common.sh@1457 -- # uname 00:03:51.830 20:56:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:51.830 20:56:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:51.830 20:56:59 -- common/autotest_common.sh@1477 -- # uname 00:03:51.830 20:56:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:51.830 20:56:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:51.830 20:56:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:51.830 lcov: LCOV version 1.15 00:03:51.830 20:56:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:04.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:18.912 20:57:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:18.912 20:57:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.912 20:57:24 -- common/autotest_common.sh@10 -- # set +x 00:04:18.912 20:57:24 -- spdk/autotest.sh@78 -- # rm -f 00:04:18.912 20:57:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.481 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:19.481 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:19.481 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:19.740 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:19.740 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:19.740 20:57:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.740 20:57:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:19.740 20:57:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:19.740 20:57:27 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:19.740 20:57:27 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:19.740 20:57:27 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:19.740 20:57:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:19.740 20:57:27 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:19.740 20:57:27 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:19.740 20:57:27 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:19.740 20:57:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:19.740 20:57:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.740 20:57:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.740 20:57:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.740 20:57:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.740 20:57:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.740 20:57:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:19.740 20:57:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.740 20:57:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.999 No valid GPT data, bailing 00:04:19.999 20:57:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.999 20:57:27 -- scripts/common.sh@394 -- # pt= 00:04:19.999 20:57:27 -- scripts/common.sh@395 -- 
# return 1 00:04:19.999 20:57:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.999 1+0 records in 00:04:19.999 1+0 records out 00:04:19.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00193405 s, 542 MB/s 00:04:19.999 20:57:27 -- spdk/autotest.sh@105 -- # sync 00:04:19.999 20:57:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.999 20:57:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.999 20:57:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:26.572 20:57:33 -- spdk/autotest.sh@111 -- # uname -s 00:04:26.572 20:57:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:26.572 20:57:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:26.572 20:57:33 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:28.477 Hugepages 00:04:28.477 node hugesize free / total 00:04:28.477 node0 1048576kB 0 / 0 00:04:28.477 node0 2048kB 0 / 0 00:04:28.477 node1 1048576kB 0 / 0 00:04:28.477 node1 2048kB 0 / 0 00:04:28.477 00:04:28.477 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.477 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:28.477 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:28.477 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:28.477 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:28.477 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:28.477 20:57:36 -- spdk/autotest.sh@117 -- # uname -s 00:04:28.477 20:57:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:28.477 20:57:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:28.477 20:57:36 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.768 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.768 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.707 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.707 20:57:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:34.085 20:57:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:34.085 20:57:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:34.085 20:57:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.085 20:57:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:34.085 20:57:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:34.085 20:57:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:34.085 20:57:41 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.085 20:57:41 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.085 20:57:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:34.085 20:57:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:34.085 20:57:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:34.085 20:57:41 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.620 Waiting for block devices as requested 00:04:36.620 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:36.878 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.878 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.878 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.878 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:37.143 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:37.143 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:37.143 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:37.401 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.401 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:37.401 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:37.659 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:37.659 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:37.659 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:37.659 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:37.917 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:37.917 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.917 20:57:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:37.917 20:57:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:37.917 20:57:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.917 20:57:45 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:37.917 20:57:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:37.917 20:57:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:37.917 20:57:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:37.917 20:57:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:37.917 20:57:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:37.917 20:57:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:37.917 20:57:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:37.917 20:57:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.917 20:57:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.917 20:57:46 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:37.917 20:57:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.917 20:57:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:38.175 20:57:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:38.175 20:57:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:38.175 20:57:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:38.175 20:57:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:38.175 20:57:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:38.175 20:57:46 -- common/autotest_common.sh@1543 -- # continue 00:04:38.175 20:57:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:38.175 20:57:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.175 20:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:38.175 20:57:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:38.175 20:57:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.175 
20:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:38.175 20:57:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.463 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.463 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.401 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:42.660 20:57:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:42.660 20:57:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.660 20:57:50 -- common/autotest_common.sh@10 -- # set +x 00:04:42.660 20:57:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:42.660 20:57:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:42.660 20:57:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:42.660 20:57:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:42.660 20:57:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:42.660 20:57:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:42.660 20:57:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:42.660 20:57:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:42.660 20:57:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:42.660 20:57:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:42.660 20:57:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.660 20:57:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:42.660 20:57:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:42.660 20:57:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:42.660 20:57:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:42.660 20:57:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:42.660 20:57:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:42.660 20:57:50 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:42.660 20:57:50 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:42.660 20:57:50 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:42.660 20:57:50 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:42.660 20:57:50 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:42.660 20:57:50 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:42.660 20:57:50 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1110926 00:04:42.660 20:57:50 -- common/autotest_common.sh@1585 -- # waitforlisten 1110926 00:04:42.660 20:57:50 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.660 20:57:50 -- common/autotest_common.sh@835 -- # '[' -z 1110926 ']' 00:04:42.660 20:57:50 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.660 20:57:50 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.660 20:57:50 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.660 20:57:50 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.660 20:57:50 -- common/autotest_common.sh@10 -- # set +x 00:04:42.660 [2024-12-05 20:57:50.696267] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:04:42.660 [2024-12-05 20:57:50.696312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110926 ] 00:04:42.919 [2024-12-05 20:57:50.772102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.919 [2024-12-05 20:57:50.814687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.178 20:57:51 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.178 20:57:51 -- common/autotest_common.sh@868 -- # return 0 00:04:43.178 20:57:51 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:43.178 20:57:51 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:43.178 20:57:51 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:46.468 nvme0n1 00:04:46.468 20:57:54 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:46.468 [2024-12-05 20:57:54.216896] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:46.468 request: 00:04:46.468 { 00:04:46.468 "nvme_ctrlr_name": "nvme0", 00:04:46.468 "password": "test", 00:04:46.468 "method": "bdev_nvme_opal_revert", 00:04:46.468 "req_id": 1 00:04:46.468 } 00:04:46.468 Got JSON-RPC error response 00:04:46.468 response: 00:04:46.468 { 00:04:46.468 
"code": -32602, 00:04:46.468 "message": "Invalid parameters" 00:04:46.468 } 00:04:46.468 20:57:54 -- common/autotest_common.sh@1591 -- # true 00:04:46.468 20:57:54 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:46.468 20:57:54 -- common/autotest_common.sh@1595 -- # killprocess 1110926 00:04:46.468 20:57:54 -- common/autotest_common.sh@954 -- # '[' -z 1110926 ']' 00:04:46.468 20:57:54 -- common/autotest_common.sh@958 -- # kill -0 1110926 00:04:46.468 20:57:54 -- common/autotest_common.sh@959 -- # uname 00:04:46.468 20:57:54 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.468 20:57:54 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1110926 00:04:46.468 20:57:54 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.468 20:57:54 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.468 20:57:54 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1110926' 00:04:46.468 killing process with pid 1110926 00:04:46.468 20:57:54 -- common/autotest_common.sh@973 -- # kill 1110926 00:04:46.468 20:57:54 -- common/autotest_common.sh@978 -- # wait 1110926 00:04:48.370 20:57:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:48.370 20:57:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:48.370 20:57:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.370 20:57:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.370 20:57:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:48.370 20:57:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.370 20:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:48.370 20:57:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:48.370 20:57:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.370 20:57:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.370 20:57:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.370 20:57:56 
-- common/autotest_common.sh@10 -- # set +x 00:04:48.629 ************************************ 00:04:48.629 START TEST env 00:04:48.629 ************************************ 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.629 * Looking for test storage... 00:04:48.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.629 20:57:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.629 20:57:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.629 20:57:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.629 20:57:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.629 20:57:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.629 20:57:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.629 20:57:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.629 20:57:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.629 20:57:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.629 20:57:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.629 20:57:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.629 20:57:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:48.629 20:57:56 env -- scripts/common.sh@345 -- # : 1 00:04:48.629 20:57:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.629 20:57:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.629 20:57:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:48.629 20:57:56 env -- scripts/common.sh@353 -- # local d=1 00:04:48.629 20:57:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.629 20:57:56 env -- scripts/common.sh@355 -- # echo 1 00:04:48.629 20:57:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.629 20:57:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:48.629 20:57:56 env -- scripts/common.sh@353 -- # local d=2 00:04:48.629 20:57:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.629 20:57:56 env -- scripts/common.sh@355 -- # echo 2 00:04:48.629 20:57:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.629 20:57:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.629 20:57:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.629 20:57:56 env -- scripts/common.sh@368 -- # return 0 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.629 --rc genhtml_branch_coverage=1 00:04:48.629 --rc genhtml_function_coverage=1 00:04:48.629 --rc genhtml_legend=1 00:04:48.629 --rc geninfo_all_blocks=1 00:04:48.629 --rc geninfo_unexecuted_blocks=1 00:04:48.629 00:04:48.629 ' 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.629 --rc genhtml_branch_coverage=1 00:04:48.629 --rc genhtml_function_coverage=1 00:04:48.629 --rc genhtml_legend=1 00:04:48.629 --rc geninfo_all_blocks=1 00:04:48.629 --rc geninfo_unexecuted_blocks=1 00:04:48.629 00:04:48.629 ' 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:48.629 --rc genhtml_branch_coverage=1 00:04:48.629 --rc genhtml_function_coverage=1 00:04:48.629 --rc genhtml_legend=1 00:04:48.629 --rc geninfo_all_blocks=1 00:04:48.629 --rc geninfo_unexecuted_blocks=1 00:04:48.629 00:04:48.629 ' 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.629 --rc genhtml_branch_coverage=1 00:04:48.629 --rc genhtml_function_coverage=1 00:04:48.629 --rc genhtml_legend=1 00:04:48.629 --rc geninfo_all_blocks=1 00:04:48.629 --rc geninfo_unexecuted_blocks=1 00:04:48.629 00:04:48.629 ' 00:04:48.629 20:57:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.629 20:57:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.629 20:57:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.629 ************************************ 00:04:48.629 START TEST env_memory 00:04:48.629 ************************************ 00:04:48.629 20:57:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.629 00:04:48.629 00:04:48.629 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.629 http://cunit.sourceforge.net/ 00:04:48.629 00:04:48.629 00:04:48.629 Suite: memory 00:04:48.889 Test: alloc and free memory map ...[2024-12-05 20:57:56.747430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:48.889 passed 00:04:48.889 Test: mem map translation ...[2024-12-05 20:57:56.765645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:48.889 [2024-12-05 
20:57:56.765658] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:48.889 [2024-12-05 20:57:56.765691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:48.889 [2024-12-05 20:57:56.765697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:48.889 passed 00:04:48.889 Test: mem map registration ...[2024-12-05 20:57:56.801397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:48.889 [2024-12-05 20:57:56.801409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:48.889 passed 00:04:48.889 Test: mem map adjacent registrations ...passed 00:04:48.889 00:04:48.889 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.889 suites 1 1 n/a 0 0 00:04:48.889 tests 4 4 4 0 0 00:04:48.889 asserts 152 152 152 0 n/a 00:04:48.889 00:04:48.889 Elapsed time = 0.133 seconds 00:04:48.889 00:04:48.889 real 0m0.146s 00:04:48.889 user 0m0.135s 00:04:48.889 sys 0m0.010s 00:04:48.889 20:57:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.889 20:57:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:48.889 ************************************ 00:04:48.889 END TEST env_memory 00:04:48.889 ************************************ 00:04:48.889 20:57:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.889 20:57:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:48.889 20:57:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.889 20:57:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.889 ************************************ 00:04:48.889 START TEST env_vtophys 00:04:48.889 ************************************ 00:04:48.889 20:57:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.889 EAL: lib.eal log level changed from notice to debug 00:04:48.889 EAL: Detected lcore 0 as core 0 on socket 0 00:04:48.889 EAL: Detected lcore 1 as core 1 on socket 0 00:04:48.889 EAL: Detected lcore 2 as core 2 on socket 0 00:04:48.889 EAL: Detected lcore 3 as core 3 on socket 0 00:04:48.889 EAL: Detected lcore 4 as core 4 on socket 0 00:04:48.889 EAL: Detected lcore 5 as core 5 on socket 0 00:04:48.889 EAL: Detected lcore 6 as core 6 on socket 0 00:04:48.889 EAL: Detected lcore 7 as core 8 on socket 0 00:04:48.889 EAL: Detected lcore 8 as core 9 on socket 0 00:04:48.889 EAL: Detected lcore 9 as core 10 on socket 0 00:04:48.889 EAL: Detected lcore 10 as core 11 on socket 0 00:04:48.889 EAL: Detected lcore 11 as core 12 on socket 0 00:04:48.889 EAL: Detected lcore 12 as core 13 on socket 0 00:04:48.889 EAL: Detected lcore 13 as core 16 on socket 0 00:04:48.889 EAL: Detected lcore 14 as core 17 on socket 0 00:04:48.889 EAL: Detected lcore 15 as core 18 on socket 0 00:04:48.889 EAL: Detected lcore 16 as core 19 on socket 0 00:04:48.889 EAL: Detected lcore 17 as core 20 on socket 0 00:04:48.889 EAL: Detected lcore 18 as core 21 on socket 0 00:04:48.889 EAL: Detected lcore 19 as core 25 on socket 0 00:04:48.889 EAL: Detected lcore 20 as core 26 on socket 0 00:04:48.889 EAL: Detected lcore 21 as core 27 on socket 0 00:04:48.889 EAL: Detected lcore 22 as core 28 on socket 0 00:04:48.889 EAL: Detected lcore 23 as core 29 on socket 0 00:04:48.889 EAL: Detected lcore 24 as core 0 on socket 1 00:04:48.889 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:48.889 EAL: Detected lcore 26 as core 2 on socket 1 00:04:48.889 EAL: Detected lcore 27 as core 3 on socket 1 00:04:48.889 EAL: Detected lcore 28 as core 4 on socket 1 00:04:48.889 EAL: Detected lcore 29 as core 5 on socket 1 00:04:48.889 EAL: Detected lcore 30 as core 6 on socket 1 00:04:48.889 EAL: Detected lcore 31 as core 8 on socket 1 00:04:48.889 EAL: Detected lcore 32 as core 10 on socket 1 00:04:48.889 EAL: Detected lcore 33 as core 11 on socket 1 00:04:48.889 EAL: Detected lcore 34 as core 12 on socket 1 00:04:48.889 EAL: Detected lcore 35 as core 13 on socket 1 00:04:48.889 EAL: Detected lcore 36 as core 16 on socket 1 00:04:48.889 EAL: Detected lcore 37 as core 17 on socket 1 00:04:48.889 EAL: Detected lcore 38 as core 18 on socket 1 00:04:48.889 EAL: Detected lcore 39 as core 19 on socket 1 00:04:48.889 EAL: Detected lcore 40 as core 20 on socket 1 00:04:48.889 EAL: Detected lcore 41 as core 21 on socket 1 00:04:48.889 EAL: Detected lcore 42 as core 24 on socket 1 00:04:48.889 EAL: Detected lcore 43 as core 25 on socket 1 00:04:48.889 EAL: Detected lcore 44 as core 26 on socket 1 00:04:48.889 EAL: Detected lcore 45 as core 27 on socket 1 00:04:48.889 EAL: Detected lcore 46 as core 28 on socket 1 00:04:48.889 EAL: Detected lcore 47 as core 29 on socket 1 00:04:48.889 EAL: Detected lcore 48 as core 0 on socket 0 00:04:48.889 EAL: Detected lcore 49 as core 1 on socket 0 00:04:48.889 EAL: Detected lcore 50 as core 2 on socket 0 00:04:48.889 EAL: Detected lcore 51 as core 3 on socket 0 00:04:48.889 EAL: Detected lcore 52 as core 4 on socket 0 00:04:48.889 EAL: Detected lcore 53 as core 5 on socket 0 00:04:48.889 EAL: Detected lcore 54 as core 6 on socket 0 00:04:48.889 EAL: Detected lcore 55 as core 8 on socket 0 00:04:48.889 EAL: Detected lcore 56 as core 9 on socket 0 00:04:48.889 EAL: Detected lcore 57 as core 10 on socket 0 00:04:48.889 EAL: Detected lcore 58 as core 11 on socket 0 00:04:48.889 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:48.889 EAL: Detected lcore 60 as core 13 on socket 0 00:04:48.889 EAL: Detected lcore 61 as core 16 on socket 0 00:04:48.889 EAL: Detected lcore 62 as core 17 on socket 0 00:04:48.889 EAL: Detected lcore 63 as core 18 on socket 0 00:04:48.889 EAL: Detected lcore 64 as core 19 on socket 0 00:04:48.889 EAL: Detected lcore 65 as core 20 on socket 0 00:04:48.889 EAL: Detected lcore 66 as core 21 on socket 0 00:04:48.889 EAL: Detected lcore 67 as core 25 on socket 0 00:04:48.889 EAL: Detected lcore 68 as core 26 on socket 0 00:04:48.889 EAL: Detected lcore 69 as core 27 on socket 0 00:04:48.890 EAL: Detected lcore 70 as core 28 on socket 0 00:04:48.890 EAL: Detected lcore 71 as core 29 on socket 0 00:04:48.890 EAL: Detected lcore 72 as core 0 on socket 1 00:04:48.890 EAL: Detected lcore 73 as core 1 on socket 1 00:04:48.890 EAL: Detected lcore 74 as core 2 on socket 1 00:04:48.890 EAL: Detected lcore 75 as core 3 on socket 1 00:04:48.890 EAL: Detected lcore 76 as core 4 on socket 1 00:04:48.890 EAL: Detected lcore 77 as core 5 on socket 1 00:04:48.890 EAL: Detected lcore 78 as core 6 on socket 1 00:04:48.890 EAL: Detected lcore 79 as core 8 on socket 1 00:04:48.890 EAL: Detected lcore 80 as core 10 on socket 1 00:04:48.890 EAL: Detected lcore 81 as core 11 on socket 1 00:04:48.890 EAL: Detected lcore 82 as core 12 on socket 1 00:04:48.890 EAL: Detected lcore 83 as core 13 on socket 1 00:04:48.890 EAL: Detected lcore 84 as core 16 on socket 1 00:04:48.890 EAL: Detected lcore 85 as core 17 on socket 1 00:04:48.890 EAL: Detected lcore 86 as core 18 on socket 1 00:04:48.890 EAL: Detected lcore 87 as core 19 on socket 1 00:04:48.890 EAL: Detected lcore 88 as core 20 on socket 1 00:04:48.890 EAL: Detected lcore 89 as core 21 on socket 1 00:04:48.890 EAL: Detected lcore 90 as core 24 on socket 1 00:04:48.890 EAL: Detected lcore 91 as core 25 on socket 1 00:04:48.890 EAL: Detected lcore 92 as core 26 on socket 1 00:04:48.890 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:48.890 EAL: Detected lcore 94 as core 28 on socket 1 00:04:48.890 EAL: Detected lcore 95 as core 29 on socket 1 00:04:48.890 EAL: Maximum logical cores by configuration: 128 00:04:48.890 EAL: Detected CPU lcores: 96 00:04:48.890 EAL: Detected NUMA nodes: 2 00:04:48.890 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:48.890 EAL: Detected shared linkage of DPDK 00:04:48.890 EAL: No shared files mode enabled, IPC will be disabled 00:04:48.890 EAL: Bus pci wants IOVA as 'DC' 00:04:48.890 EAL: Buses did not request a specific IOVA mode. 00:04:48.890 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:48.890 EAL: Selected IOVA mode 'VA' 00:04:48.890 EAL: Probing VFIO support... 00:04:48.890 EAL: IOMMU type 1 (Type 1) is supported 00:04:48.890 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:48.890 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:48.890 EAL: VFIO support initialized 00:04:48.890 EAL: Ask a virtual area of 0x2e000 bytes 00:04:48.890 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:48.890 EAL: Setting up physically contiguous memory... 
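As a side note on the VFIO probe above: EAL selects IOVA-as-VA mode only when a working IOMMU is present. A minimal, hedged sketch of an equivalent host-side check (not part of the autotest scripts; the sysfs path is the standard Linux location, but availability varies by host and kernel config):

```shell
#!/bin/bash
# check_iommu: succeed (exit 0) if the given sysfs directory exists and
# contains at least one IOMMU group, which is what VFIO needs for VA mode.
# Defaults to the standard Linux path; pass another dir for testing.
check_iommu() {
    local dir="${1:-/sys/kernel/iommu_groups}"
    [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}

if check_iommu; then
    echo "IOMMU groups present: VFIO can use IOVA as VA"
else
    echo "no IOMMU groups: EAL would fall back to PA or no-IOMMU mode"
fi
```

This mirrors the "IOMMU is available, selecting IOVA as VA mode" decision in the log, but is only an approximation of EAL's internal probing.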
00:04:48.890 EAL: Setting maximum number of open files to 524288 00:04:48.890 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:48.890 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:48.890 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:48.890 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:48.890 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.890 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:48.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.890 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.890 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:48.890 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:48.890 EAL: Hugepages will be freed exactly as allocated. 
00:04:48.890 EAL: No shared files mode enabled, IPC is disabled 00:04:48.890 EAL: No shared files mode enabled, IPC is disabled 00:04:48.890 EAL: TSC frequency is ~2100000 KHz 00:04:48.890 EAL: Main lcore 0 is ready (tid=7f4bdcf56a00;cpuset=[0]) 00:04:48.890 EAL: Trying to obtain current memory policy. 00:04:48.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.890 EAL: Restoring previous memory policy: 0 00:04:48.890 EAL: request: mp_malloc_sync 00:04:48.890 EAL: No shared files mode enabled, IPC is disabled 00:04:48.890 EAL: Heap on socket 0 was expanded by 2MB 00:04:48.890 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.150 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.150 00:04:49.150 00:04:49.150 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.150 http://cunit.sourceforge.net/ 00:04:49.150 00:04:49.150 00:04:49.150 Suite: components_suite 00:04:49.150 Test: vtophys_malloc_test ...passed 00:04:49.150 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.150 EAL: Restoring previous memory policy: 4 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.150 EAL: request: mp_malloc_sync 00:04:49.150 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.150 EAL: request: mp_malloc_sync 00:04:49.150 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.150 EAL: Trying to obtain current memory policy. 
00:04:49.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.150 EAL: Restoring previous memory policy: 4 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.150 EAL: request: mp_malloc_sync 00:04:49.150 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.150 EAL: request: mp_malloc_sync 00:04:49.150 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.150 EAL: Trying to obtain current memory policy. 00:04:49.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.150 EAL: Restoring previous memory policy: 4 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.150 EAL: request: mp_malloc_sync 00:04:49.150 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.150 EAL: request: mp_malloc_sync 00:04:49.150 EAL: No shared files mode enabled, IPC is disabled 00:04:49.150 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.150 EAL: Trying to obtain current memory policy. 00:04:49.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.150 EAL: Restoring previous memory policy: 4 00:04:49.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.151 EAL: Trying to obtain current memory policy. 
00:04:49.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.151 EAL: Restoring previous memory policy: 4 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was expanded by 34MB 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was shrunk by 34MB 00:04:49.151 EAL: Trying to obtain current memory policy. 00:04:49.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.151 EAL: Restoring previous memory policy: 4 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was expanded by 66MB 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was shrunk by 66MB 00:04:49.151 EAL: Trying to obtain current memory policy. 00:04:49.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.151 EAL: Restoring previous memory policy: 4 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was expanded by 130MB 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was shrunk by 130MB 00:04:49.151 EAL: Trying to obtain current memory policy. 
00:04:49.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.151 EAL: Restoring previous memory policy: 4 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was expanded by 258MB 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was shrunk by 258MB 00:04:49.151 EAL: Trying to obtain current memory policy. 00:04:49.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.410 EAL: Restoring previous memory policy: 4 00:04:49.410 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.410 EAL: request: mp_malloc_sync 00:04:49.410 EAL: No shared files mode enabled, IPC is disabled 00:04:49.410 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.410 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.410 EAL: request: mp_malloc_sync 00:04:49.410 EAL: No shared files mode enabled, IPC is disabled 00:04:49.410 EAL: Heap on socket 0 was shrunk by 514MB 00:04:49.410 EAL: Trying to obtain current memory policy. 
00:04:49.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.669 EAL: Restoring previous memory policy: 4 00:04:49.669 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.669 EAL: request: mp_malloc_sync 00:04:49.669 EAL: No shared files mode enabled, IPC is disabled 00:04:49.669 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.928 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.928 EAL: request: mp_malloc_sync 00:04:49.928 EAL: No shared files mode enabled, IPC is disabled 00:04:49.928 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.928 passed 00:04:49.928 00:04:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.928 suites 1 1 n/a 0 0 00:04:49.928 tests 2 2 2 0 0 00:04:49.928 asserts 497 497 497 0 n/a 00:04:49.928 00:04:49.928 Elapsed time = 0.969 seconds 00:04:49.928 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.928 EAL: request: mp_malloc_sync 00:04:49.928 EAL: No shared files mode enabled, IPC is disabled 00:04:49.928 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.928 EAL: No shared files mode enabled, IPC is disabled 00:04:49.928 EAL: No shared files mode enabled, IPC is disabled 00:04:49.928 EAL: No shared files mode enabled, IPC is disabled 00:04:49.928 00:04:49.928 real 0m1.095s 00:04:49.928 user 0m0.644s 00:04:49.928 sys 0m0.426s 00:04:49.928 20:57:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.928 20:57:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:49.928 ************************************ 00:04:49.928 END TEST env_vtophys 00:04:49.928 ************************************ 00:04:50.187 20:57:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:50.187 20:57:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.187 20:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.187 20:57:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.187 
************************************ 00:04:50.187 START TEST env_pci 00:04:50.187 ************************************ 00:04:50.187 20:57:58 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:50.187 00:04:50.187 00:04:50.187 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.187 http://cunit.sourceforge.net/ 00:04:50.187 00:04:50.187 00:04:50.187 Suite: pci 00:04:50.187 Test: pci_hook ...[2024-12-05 20:57:58.106460] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1112241 has claimed it 00:04:50.187 EAL: Cannot find device (10000:00:01.0) 00:04:50.187 EAL: Failed to attach device on primary process 00:04:50.187 passed 00:04:50.187 00:04:50.187 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.187 suites 1 1 n/a 0 0 00:04:50.187 tests 1 1 1 0 0 00:04:50.187 asserts 25 25 25 0 n/a 00:04:50.187 00:04:50.187 Elapsed time = 0.028 seconds 00:04:50.187 00:04:50.187 real 0m0.048s 00:04:50.187 user 0m0.016s 00:04:50.187 sys 0m0.032s 00:04:50.187 20:57:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.188 20:57:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:50.188 ************************************ 00:04:50.188 END TEST env_pci 00:04:50.188 ************************************ 00:04:50.188 20:57:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:50.188 20:57:58 env -- env/env.sh@15 -- # uname 00:04:50.188 20:57:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:50.188 20:57:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:50.188 20:57:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.188 20:57:58 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:50.188 20:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.188 20:57:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.188 ************************************ 00:04:50.188 START TEST env_dpdk_post_init 00:04:50.188 ************************************ 00:04:50.188 20:57:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.188 EAL: Detected CPU lcores: 96 00:04:50.188 EAL: Detected NUMA nodes: 2 00:04:50.188 EAL: Detected shared linkage of DPDK 00:04:50.188 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:50.188 EAL: Selected IOVA mode 'VA' 00:04:50.188 EAL: VFIO support initialized 00:04:50.188 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:50.446 EAL: Using IOMMU type 1 (Type 1) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:50.446 EAL: Ignore mapping IO port bar(1) 00:04:50.446 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:51.384 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:51.384 EAL: Ignore mapping IO port bar(1) 00:04:51.384 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:54.670 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:54.670 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:55.239 Starting DPDK initialization... 00:04:55.239 Starting SPDK post initialization... 00:04:55.239 SPDK NVMe probe 00:04:55.239 Attaching to 0000:5e:00.0 00:04:55.239 Attached to 0000:5e:00.0 00:04:55.239 Cleaning up... 
00:04:55.239 00:04:55.239 real 0m4.850s 00:04:55.239 user 0m3.419s 00:04:55.239 sys 0m0.499s 00:04:55.239 20:58:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.239 20:58:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.239 ************************************ 00:04:55.239 END TEST env_dpdk_post_init 00:04:55.239 ************************************ 00:04:55.239 20:58:03 env -- env/env.sh@26 -- # uname 00:04:55.239 20:58:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.239 20:58:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.239 20:58:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.239 20:58:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.239 20:58:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.239 ************************************ 00:04:55.239 START TEST env_mem_callbacks 00:04:55.239 ************************************ 00:04:55.239 20:58:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.239 EAL: Detected CPU lcores: 96 00:04:55.239 EAL: Detected NUMA nodes: 2 00:04:55.239 EAL: Detected shared linkage of DPDK 00:04:55.239 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.239 EAL: Selected IOVA mode 'VA' 00:04:55.239 EAL: VFIO support initialized 00:04:55.239 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.239 00:04:55.239 00:04:55.239 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.239 http://cunit.sourceforge.net/ 00:04:55.239 00:04:55.239 00:04:55.239 Suite: memory 00:04:55.239 Test: test ... 
00:04:55.240 register 0x200000200000 2097152 00:04:55.240 malloc 3145728 00:04:55.240 register 0x200000400000 4194304 00:04:55.240 buf 0x200000500000 len 3145728 PASSED 00:04:55.240 malloc 64 00:04:55.240 buf 0x2000004fff40 len 64 PASSED 00:04:55.240 malloc 4194304 00:04:55.240 register 0x200000800000 6291456 00:04:55.240 buf 0x200000a00000 len 4194304 PASSED 00:04:55.240 free 0x200000500000 3145728 00:04:55.240 free 0x2000004fff40 64 00:04:55.240 unregister 0x200000400000 4194304 PASSED 00:04:55.240 free 0x200000a00000 4194304 00:04:55.240 unregister 0x200000800000 6291456 PASSED 00:04:55.240 malloc 8388608 00:04:55.240 register 0x200000400000 10485760 00:04:55.240 buf 0x200000600000 len 8388608 PASSED 00:04:55.240 free 0x200000600000 8388608 00:04:55.240 unregister 0x200000400000 10485760 PASSED 00:04:55.240 passed 00:04:55.240 00:04:55.240 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.240 suites 1 1 n/a 0 0 00:04:55.240 tests 1 1 1 0 0 00:04:55.240 asserts 15 15 15 0 n/a 00:04:55.240 00:04:55.240 Elapsed time = 0.008 seconds 00:04:55.240 00:04:55.240 real 0m0.058s 00:04:55.240 user 0m0.019s 00:04:55.240 sys 0m0.039s 00:04:55.240 20:58:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.240 20:58:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.240 ************************************ 00:04:55.240 END TEST env_mem_callbacks 00:04:55.240 ************************************ 00:04:55.240 00:04:55.240 real 0m6.734s 00:04:55.240 user 0m4.472s 00:04:55.240 sys 0m1.343s 00:04:55.240 20:58:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.240 20:58:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.240 ************************************ 00:04:55.240 END TEST env 00:04:55.240 ************************************ 00:04:55.240 20:58:03 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.240 20:58:03 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.240 20:58:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.240 20:58:03 -- common/autotest_common.sh@10 -- # set +x 00:04:55.240 ************************************ 00:04:55.240 START TEST rpc 00:04:55.240 ************************************ 00:04:55.240 20:58:03 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.499 * Looking for test storage... 00:04:55.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.499 20:58:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.499 20:58:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.499 20:58:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.499 20:58:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.499 20:58:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.499 20:58:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.499 20:58:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.499 20:58:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.499 20:58:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.499 20:58:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.499 20:58:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.499 20:58:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.499 20:58:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.499 20:58:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.499 20:58:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.499 --rc genhtml_branch_coverage=1 00:04:55.499 --rc genhtml_function_coverage=1 00:04:55.499 --rc genhtml_legend=1 00:04:55.499 --rc geninfo_all_blocks=1 00:04:55.499 --rc geninfo_unexecuted_blocks=1 00:04:55.499 00:04:55.499 ' 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.499 --rc genhtml_branch_coverage=1 00:04:55.499 --rc genhtml_function_coverage=1 00:04:55.499 --rc genhtml_legend=1 00:04:55.499 --rc geninfo_all_blocks=1 00:04:55.499 --rc geninfo_unexecuted_blocks=1 00:04:55.499 00:04:55.499 ' 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:55.499 --rc genhtml_branch_coverage=1 00:04:55.499 --rc genhtml_function_coverage=1 00:04:55.499 --rc genhtml_legend=1 00:04:55.499 --rc geninfo_all_blocks=1 00:04:55.499 --rc geninfo_unexecuted_blocks=1 00:04:55.499 00:04:55.499 ' 00:04:55.499 20:58:03 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.499 --rc genhtml_branch_coverage=1 00:04:55.500 --rc genhtml_function_coverage=1 00:04:55.500 --rc genhtml_legend=1 00:04:55.500 --rc geninfo_all_blocks=1 00:04:55.500 --rc geninfo_unexecuted_blocks=1 00:04:55.500 00:04:55.500 ' 00:04:55.500 20:58:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1113287 00:04:55.500 20:58:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.500 20:58:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:55.500 20:58:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1113287 00:04:55.500 20:58:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 1113287 ']' 00:04:55.500 20:58:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.500 20:58:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.500 20:58:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.500 20:58:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.500 20:58:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.500 [2024-12-05 20:58:03.530234] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:04:55.500 [2024-12-05 20:58:03.530285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113287 ] 00:04:55.500 [2024-12-05 20:58:03.603177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.758 [2024-12-05 20:58:03.645478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:55.758 [2024-12-05 20:58:03.645509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1113287' to capture a snapshot of events at runtime. 00:04:55.758 [2024-12-05 20:58:03.645517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:55.758 [2024-12-05 20:58:03.645523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:55.759 [2024-12-05 20:58:03.645529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1113287 for offline analysis/debug. 
00:04:55.759 [2024-12-05 20:58:03.645977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.759 20:58:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.759 20:58:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:55.759 20:58:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.759 20:58:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.759 20:58:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:55.759 20:58:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.017 20:58:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.017 20:58:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.017 20:58:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.017 ************************************ 00:04:56.017 START TEST rpc_integrity 00:04:56.017 ************************************ 00:04:56.017 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:56.017 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.017 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.017 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.017 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.017 20:58:03 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.018 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.018 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.018 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.018 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.018 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.018 20:58:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.018 { 00:04:56.018 "name": "Malloc0", 00:04:56.018 "aliases": [ 00:04:56.018 "d4144485-8f08-46c1-8429-4f83b07e48dc" 00:04:56.018 ], 00:04:56.018 "product_name": "Malloc disk", 00:04:56.018 "block_size": 512, 00:04:56.018 "num_blocks": 16384, 00:04:56.018 "uuid": "d4144485-8f08-46c1-8429-4f83b07e48dc", 00:04:56.018 "assigned_rate_limits": { 00:04:56.018 "rw_ios_per_sec": 0, 00:04:56.018 "rw_mbytes_per_sec": 0, 00:04:56.018 "r_mbytes_per_sec": 0, 00:04:56.018 "w_mbytes_per_sec": 0 00:04:56.018 }, 00:04:56.018 "claimed": false, 00:04:56.018 "zoned": false, 00:04:56.018 "supported_io_types": { 00:04:56.018 "read": true, 00:04:56.018 "write": true, 00:04:56.018 "unmap": true, 00:04:56.018 "flush": true, 00:04:56.018 "reset": true, 00:04:56.018 "nvme_admin": false, 00:04:56.018 "nvme_io": false, 00:04:56.018 "nvme_io_md": false, 00:04:56.018 "write_zeroes": true, 00:04:56.018 "zcopy": true, 00:04:56.018 "get_zone_info": false, 00:04:56.018 
"zone_management": false, 00:04:56.018 "zone_append": false, 00:04:56.018 "compare": false, 00:04:56.018 "compare_and_write": false, 00:04:56.018 "abort": true, 00:04:56.018 "seek_hole": false, 00:04:56.018 "seek_data": false, 00:04:56.018 "copy": true, 00:04:56.018 "nvme_iov_md": false 00:04:56.018 }, 00:04:56.018 "memory_domains": [ 00:04:56.018 { 00:04:56.018 "dma_device_id": "system", 00:04:56.018 "dma_device_type": 1 00:04:56.018 }, 00:04:56.018 { 00:04:56.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.018 "dma_device_type": 2 00:04:56.018 } 00:04:56.018 ], 00:04:56.018 "driver_specific": {} 00:04:56.018 } 00:04:56.018 ]' 00:04:56.018 20:58:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.018 [2024-12-05 20:58:04.033776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.018 [2024-12-05 20:58:04.033805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.018 [2024-12-05 20:58:04.033817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b1e100 00:04:56.018 [2024-12-05 20:58:04.033823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.018 [2024-12-05 20:58:04.034909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.018 [2024-12-05 20:58:04.034929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.018 Passthru0 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.018 { 00:04:56.018 "name": "Malloc0", 00:04:56.018 "aliases": [ 00:04:56.018 "d4144485-8f08-46c1-8429-4f83b07e48dc" 00:04:56.018 ], 00:04:56.018 "product_name": "Malloc disk", 00:04:56.018 "block_size": 512, 00:04:56.018 "num_blocks": 16384, 00:04:56.018 "uuid": "d4144485-8f08-46c1-8429-4f83b07e48dc", 00:04:56.018 "assigned_rate_limits": { 00:04:56.018 "rw_ios_per_sec": 0, 00:04:56.018 "rw_mbytes_per_sec": 0, 00:04:56.018 "r_mbytes_per_sec": 0, 00:04:56.018 "w_mbytes_per_sec": 0 00:04:56.018 }, 00:04:56.018 "claimed": true, 00:04:56.018 "claim_type": "exclusive_write", 00:04:56.018 "zoned": false, 00:04:56.018 "supported_io_types": { 00:04:56.018 "read": true, 00:04:56.018 "write": true, 00:04:56.018 "unmap": true, 00:04:56.018 "flush": true, 00:04:56.018 "reset": true, 00:04:56.018 "nvme_admin": false, 00:04:56.018 "nvme_io": false, 00:04:56.018 "nvme_io_md": false, 00:04:56.018 "write_zeroes": true, 00:04:56.018 "zcopy": true, 00:04:56.018 "get_zone_info": false, 00:04:56.018 "zone_management": false, 00:04:56.018 "zone_append": false, 00:04:56.018 "compare": false, 00:04:56.018 "compare_and_write": false, 00:04:56.018 "abort": true, 00:04:56.018 "seek_hole": false, 00:04:56.018 "seek_data": false, 00:04:56.018 "copy": true, 00:04:56.018 "nvme_iov_md": false 00:04:56.018 }, 00:04:56.018 "memory_domains": [ 00:04:56.018 { 00:04:56.018 "dma_device_id": "system", 00:04:56.018 "dma_device_type": 1 00:04:56.018 }, 00:04:56.018 { 00:04:56.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.018 "dma_device_type": 2 00:04:56.018 } 00:04:56.018 ], 00:04:56.018 "driver_specific": {} 00:04:56.018 }, 00:04:56.018 { 
00:04:56.018 "name": "Passthru0", 00:04:56.018 "aliases": [ 00:04:56.018 "ea81ede9-f70b-5f10-b016-1839266e967a" 00:04:56.018 ], 00:04:56.018 "product_name": "passthru", 00:04:56.018 "block_size": 512, 00:04:56.018 "num_blocks": 16384, 00:04:56.018 "uuid": "ea81ede9-f70b-5f10-b016-1839266e967a", 00:04:56.018 "assigned_rate_limits": { 00:04:56.018 "rw_ios_per_sec": 0, 00:04:56.018 "rw_mbytes_per_sec": 0, 00:04:56.018 "r_mbytes_per_sec": 0, 00:04:56.018 "w_mbytes_per_sec": 0 00:04:56.018 }, 00:04:56.018 "claimed": false, 00:04:56.018 "zoned": false, 00:04:56.018 "supported_io_types": { 00:04:56.018 "read": true, 00:04:56.018 "write": true, 00:04:56.018 "unmap": true, 00:04:56.018 "flush": true, 00:04:56.018 "reset": true, 00:04:56.018 "nvme_admin": false, 00:04:56.018 "nvme_io": false, 00:04:56.018 "nvme_io_md": false, 00:04:56.018 "write_zeroes": true, 00:04:56.018 "zcopy": true, 00:04:56.018 "get_zone_info": false, 00:04:56.018 "zone_management": false, 00:04:56.018 "zone_append": false, 00:04:56.018 "compare": false, 00:04:56.018 "compare_and_write": false, 00:04:56.018 "abort": true, 00:04:56.018 "seek_hole": false, 00:04:56.018 "seek_data": false, 00:04:56.018 "copy": true, 00:04:56.018 "nvme_iov_md": false 00:04:56.018 }, 00:04:56.018 "memory_domains": [ 00:04:56.018 { 00:04:56.018 "dma_device_id": "system", 00:04:56.018 "dma_device_type": 1 00:04:56.018 }, 00:04:56.018 { 00:04:56.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.018 "dma_device_type": 2 00:04:56.018 } 00:04:56.018 ], 00:04:56.018 "driver_specific": { 00:04:56.018 "passthru": { 00:04:56.018 "name": "Passthru0", 00:04:56.018 "base_bdev_name": "Malloc0" 00:04:56.018 } 00:04:56.018 } 00:04:56.018 } 00:04:56.018 ]' 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.018 20:58:04 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.018 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.018 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.277 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.277 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.277 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.277 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.277 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.277 20:58:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.277 00:04:56.277 real 0m0.277s 00:04:56.277 user 0m0.173s 00:04:56.277 sys 0m0.039s 00:04:56.277 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.277 20:58:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 ************************************ 00:04:56.277 END TEST rpc_integrity 00:04:56.277 ************************************ 00:04:56.277 20:58:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:56.277 20:58:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.277 20:58:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.277 20:58:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 ************************************ 00:04:56.277 START TEST rpc_plugins 
00:04:56.277 ************************************ 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:56.277 { 00:04:56.277 "name": "Malloc1", 00:04:56.277 "aliases": [ 00:04:56.277 "513a9ed2-823e-41f5-9296-79f9e35fee32" 00:04:56.277 ], 00:04:56.277 "product_name": "Malloc disk", 00:04:56.277 "block_size": 4096, 00:04:56.277 "num_blocks": 256, 00:04:56.277 "uuid": "513a9ed2-823e-41f5-9296-79f9e35fee32", 00:04:56.277 "assigned_rate_limits": { 00:04:56.277 "rw_ios_per_sec": 0, 00:04:56.277 "rw_mbytes_per_sec": 0, 00:04:56.277 "r_mbytes_per_sec": 0, 00:04:56.277 "w_mbytes_per_sec": 0 00:04:56.277 }, 00:04:56.277 "claimed": false, 00:04:56.277 "zoned": false, 00:04:56.277 "supported_io_types": { 00:04:56.277 "read": true, 00:04:56.277 "write": true, 00:04:56.277 "unmap": true, 00:04:56.277 "flush": true, 00:04:56.277 "reset": true, 00:04:56.277 "nvme_admin": false, 00:04:56.277 "nvme_io": false, 00:04:56.277 "nvme_io_md": false, 00:04:56.277 "write_zeroes": true, 00:04:56.277 "zcopy": true, 00:04:56.277 "get_zone_info": false, 00:04:56.277 "zone_management": false, 00:04:56.277 
"zone_append": false, 00:04:56.277 "compare": false, 00:04:56.277 "compare_and_write": false, 00:04:56.277 "abort": true, 00:04:56.277 "seek_hole": false, 00:04:56.277 "seek_data": false, 00:04:56.277 "copy": true, 00:04:56.277 "nvme_iov_md": false 00:04:56.277 }, 00:04:56.277 "memory_domains": [ 00:04:56.277 { 00:04:56.277 "dma_device_id": "system", 00:04:56.277 "dma_device_type": 1 00:04:56.277 }, 00:04:56.277 { 00:04:56.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.277 "dma_device_type": 2 00:04:56.277 } 00:04:56.277 ], 00:04:56.277 "driver_specific": {} 00:04:56.277 } 00:04:56.277 ]' 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:56.277 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:56.535 20:58:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:56.535 00:04:56.535 real 0m0.147s 00:04:56.535 user 0m0.091s 00:04:56.535 sys 0m0.018s 00:04:56.535 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.535 20:58:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 ************************************ 
00:04:56.535 END TEST rpc_plugins 00:04:56.535 ************************************ 00:04:56.535 20:58:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:56.535 20:58:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.535 20:58:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.535 20:58:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 ************************************ 00:04:56.535 START TEST rpc_trace_cmd_test 00:04:56.535 ************************************ 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.535 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:56.535 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1113287", 00:04:56.535 "tpoint_group_mask": "0x8", 00:04:56.535 "iscsi_conn": { 00:04:56.535 "mask": "0x2", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "scsi": { 00:04:56.535 "mask": "0x4", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "bdev": { 00:04:56.535 "mask": "0x8", 00:04:56.535 "tpoint_mask": "0xffffffffffffffff" 00:04:56.535 }, 00:04:56.535 "nvmf_rdma": { 00:04:56.535 "mask": "0x10", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "nvmf_tcp": { 00:04:56.535 "mask": "0x20", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "ftl": { 00:04:56.535 "mask": "0x40", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "blobfs": { 00:04:56.535 "mask": "0x80", 00:04:56.535 
"tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "dsa": { 00:04:56.535 "mask": "0x200", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "thread": { 00:04:56.535 "mask": "0x400", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "nvme_pcie": { 00:04:56.535 "mask": "0x800", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "iaa": { 00:04:56.535 "mask": "0x1000", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "nvme_tcp": { 00:04:56.535 "mask": "0x2000", 00:04:56.535 "tpoint_mask": "0x0" 00:04:56.535 }, 00:04:56.535 "bdev_nvme": { 00:04:56.536 "mask": "0x4000", 00:04:56.536 "tpoint_mask": "0x0" 00:04:56.536 }, 00:04:56.536 "sock": { 00:04:56.536 "mask": "0x8000", 00:04:56.536 "tpoint_mask": "0x0" 00:04:56.536 }, 00:04:56.536 "blob": { 00:04:56.536 "mask": "0x10000", 00:04:56.536 "tpoint_mask": "0x0" 00:04:56.536 }, 00:04:56.536 "bdev_raid": { 00:04:56.536 "mask": "0x20000", 00:04:56.536 "tpoint_mask": "0x0" 00:04:56.536 }, 00:04:56.536 "scheduler": { 00:04:56.536 "mask": "0x40000", 00:04:56.536 "tpoint_mask": "0x0" 00:04:56.536 } 00:04:56.536 }' 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.536 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.860 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.860 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.860 20:58:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:56.860 00:04:56.860 real 0m0.230s 00:04:56.860 user 0m0.194s 00:04:56.860 sys 0m0.028s 00:04:56.860 20:58:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.860 20:58:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 ************************************ 00:04:56.860 END TEST rpc_trace_cmd_test 00:04:56.860 ************************************ 00:04:56.860 20:58:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.860 20:58:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.860 20:58:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.860 20:58:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.860 20:58:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.860 20:58:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 ************************************ 00:04:56.860 START TEST rpc_daemon_integrity 00:04:56.860 ************************************ 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.860 { 00:04:56.860 "name": "Malloc2", 00:04:56.860 "aliases": [ 00:04:56.860 "913f5146-746b-4a1c-93a8-cfb99d332313" 00:04:56.860 ], 00:04:56.860 "product_name": "Malloc disk", 00:04:56.860 "block_size": 512, 00:04:56.860 "num_blocks": 16384, 00:04:56.860 "uuid": "913f5146-746b-4a1c-93a8-cfb99d332313", 00:04:56.860 "assigned_rate_limits": { 00:04:56.860 "rw_ios_per_sec": 0, 00:04:56.860 "rw_mbytes_per_sec": 0, 00:04:56.860 "r_mbytes_per_sec": 0, 00:04:56.860 "w_mbytes_per_sec": 0 00:04:56.860 }, 00:04:56.860 "claimed": false, 00:04:56.860 "zoned": false, 00:04:56.860 "supported_io_types": { 00:04:56.860 "read": true, 00:04:56.860 "write": true, 00:04:56.860 "unmap": true, 00:04:56.860 "flush": true, 00:04:56.860 "reset": true, 00:04:56.860 "nvme_admin": false, 00:04:56.860 "nvme_io": false, 00:04:56.860 "nvme_io_md": false, 00:04:56.860 "write_zeroes": true, 00:04:56.860 "zcopy": true, 00:04:56.860 "get_zone_info": false, 00:04:56.860 "zone_management": false, 00:04:56.860 "zone_append": false, 00:04:56.860 "compare": false, 00:04:56.860 "compare_and_write": false, 00:04:56.860 "abort": true, 00:04:56.860 "seek_hole": false, 00:04:56.860 "seek_data": false, 00:04:56.860 "copy": true, 00:04:56.860 "nvme_iov_md": false 00:04:56.860 }, 00:04:56.860 "memory_domains": [ 00:04:56.860 { 
00:04:56.860 "dma_device_id": "system", 00:04:56.860 "dma_device_type": 1 00:04:56.860 }, 00:04:56.860 { 00:04:56.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.860 "dma_device_type": 2 00:04:56.860 } 00:04:56.860 ], 00:04:56.860 "driver_specific": {} 00:04:56.860 } 00:04:56.860 ]' 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 [2024-12-05 20:58:04.884080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:56.860 [2024-12-05 20:58:04.884108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.860 [2024-12-05 20:58:04.884120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19dc450 00:04:56.860 [2024-12-05 20:58:04.884127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.860 [2024-12-05 20:58:04.885098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.860 [2024-12-05 20:58:04.885118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.860 Passthru0 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:56.860 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.860 { 00:04:56.860 "name": "Malloc2", 00:04:56.860 "aliases": [ 00:04:56.860 "913f5146-746b-4a1c-93a8-cfb99d332313" 00:04:56.860 ], 00:04:56.860 "product_name": "Malloc disk", 00:04:56.860 "block_size": 512, 00:04:56.860 "num_blocks": 16384, 00:04:56.860 "uuid": "913f5146-746b-4a1c-93a8-cfb99d332313", 00:04:56.860 "assigned_rate_limits": { 00:04:56.860 "rw_ios_per_sec": 0, 00:04:56.860 "rw_mbytes_per_sec": 0, 00:04:56.860 "r_mbytes_per_sec": 0, 00:04:56.860 "w_mbytes_per_sec": 0 00:04:56.861 }, 00:04:56.861 "claimed": true, 00:04:56.861 "claim_type": "exclusive_write", 00:04:56.861 "zoned": false, 00:04:56.861 "supported_io_types": { 00:04:56.861 "read": true, 00:04:56.861 "write": true, 00:04:56.861 "unmap": true, 00:04:56.861 "flush": true, 00:04:56.861 "reset": true, 00:04:56.861 "nvme_admin": false, 00:04:56.861 "nvme_io": false, 00:04:56.861 "nvme_io_md": false, 00:04:56.861 "write_zeroes": true, 00:04:56.861 "zcopy": true, 00:04:56.861 "get_zone_info": false, 00:04:56.861 "zone_management": false, 00:04:56.861 "zone_append": false, 00:04:56.861 "compare": false, 00:04:56.861 "compare_and_write": false, 00:04:56.861 "abort": true, 00:04:56.861 "seek_hole": false, 00:04:56.861 "seek_data": false, 00:04:56.861 "copy": true, 00:04:56.861 "nvme_iov_md": false 00:04:56.861 }, 00:04:56.861 "memory_domains": [ 00:04:56.861 { 00:04:56.861 "dma_device_id": "system", 00:04:56.861 "dma_device_type": 1 00:04:56.861 }, 00:04:56.861 { 00:04:56.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.861 "dma_device_type": 2 00:04:56.861 } 00:04:56.861 ], 00:04:56.861 "driver_specific": {} 00:04:56.861 }, 00:04:56.861 { 00:04:56.861 "name": "Passthru0", 00:04:56.861 "aliases": [ 00:04:56.861 "55921e03-a990-5524-bd3c-3c51bd5d04f2" 00:04:56.861 ], 00:04:56.861 "product_name": "passthru", 00:04:56.861 "block_size": 512, 00:04:56.861 "num_blocks": 16384, 00:04:56.861 "uuid": 
"55921e03-a990-5524-bd3c-3c51bd5d04f2", 00:04:56.861 "assigned_rate_limits": { 00:04:56.861 "rw_ios_per_sec": 0, 00:04:56.861 "rw_mbytes_per_sec": 0, 00:04:56.861 "r_mbytes_per_sec": 0, 00:04:56.861 "w_mbytes_per_sec": 0 00:04:56.861 }, 00:04:56.861 "claimed": false, 00:04:56.861 "zoned": false, 00:04:56.861 "supported_io_types": { 00:04:56.861 "read": true, 00:04:56.861 "write": true, 00:04:56.861 "unmap": true, 00:04:56.861 "flush": true, 00:04:56.861 "reset": true, 00:04:56.861 "nvme_admin": false, 00:04:56.861 "nvme_io": false, 00:04:56.861 "nvme_io_md": false, 00:04:56.861 "write_zeroes": true, 00:04:56.861 "zcopy": true, 00:04:56.861 "get_zone_info": false, 00:04:56.861 "zone_management": false, 00:04:56.861 "zone_append": false, 00:04:56.861 "compare": false, 00:04:56.861 "compare_and_write": false, 00:04:56.861 "abort": true, 00:04:56.861 "seek_hole": false, 00:04:56.861 "seek_data": false, 00:04:56.861 "copy": true, 00:04:56.861 "nvme_iov_md": false 00:04:56.861 }, 00:04:56.861 "memory_domains": [ 00:04:56.861 { 00:04:56.861 "dma_device_id": "system", 00:04:56.861 "dma_device_type": 1 00:04:56.861 }, 00:04:56.861 { 00:04:56.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.861 "dma_device_type": 2 00:04:56.861 } 00:04:56.861 ], 00:04:56.861 "driver_specific": { 00:04:56.861 "passthru": { 00:04:56.861 "name": "Passthru0", 00:04:56.861 "base_bdev_name": "Malloc2" 00:04:56.861 } 00:04:56.861 } 00:04:56.861 } 00:04:56.861 ]' 00:04:56.861 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.154 20:58:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.154 20:58:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.154 00:04:57.154 real 0m0.274s 00:04:57.154 user 0m0.165s 00:04:57.154 sys 0m0.044s 00:04:57.154 20:58:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.154 20:58:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.154 ************************************ 00:04:57.154 END TEST rpc_daemon_integrity 00:04:57.155 ************************************ 00:04:57.155 20:58:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.155 20:58:05 rpc -- rpc/rpc.sh@84 -- # killprocess 1113287 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 1113287 ']' 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@958 -- # kill -0 1113287 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@959 -- # uname 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.155 20:58:05 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1113287 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1113287' 00:04:57.155 killing process with pid 1113287 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@973 -- # kill 1113287 00:04:57.155 20:58:05 rpc -- common/autotest_common.sh@978 -- # wait 1113287 00:04:57.413 00:04:57.413 real 0m2.112s 00:04:57.413 user 0m2.700s 00:04:57.413 sys 0m0.695s 00:04:57.413 20:58:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.413 20:58:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.413 ************************************ 00:04:57.413 END TEST rpc 00:04:57.413 ************************************ 00:04:57.413 20:58:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.413 20:58:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.413 20:58:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.413 20:58:05 -- common/autotest_common.sh@10 -- # set +x 00:04:57.413 ************************************ 00:04:57.413 START TEST skip_rpc 00:04:57.413 ************************************ 00:04:57.413 20:58:05 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.671 * Looking for test storage... 
00:04:57.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.671 20:58:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.671 --rc genhtml_branch_coverage=1 00:04:57.671 --rc genhtml_function_coverage=1 00:04:57.671 --rc genhtml_legend=1 00:04:57.671 --rc geninfo_all_blocks=1 00:04:57.671 --rc geninfo_unexecuted_blocks=1 00:04:57.671 00:04:57.671 ' 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.671 --rc genhtml_branch_coverage=1 00:04:57.671 --rc genhtml_function_coverage=1 00:04:57.671 --rc genhtml_legend=1 00:04:57.671 --rc geninfo_all_blocks=1 00:04:57.671 --rc geninfo_unexecuted_blocks=1 00:04:57.671 00:04:57.671 ' 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:57.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.671 --rc genhtml_branch_coverage=1 00:04:57.671 --rc genhtml_function_coverage=1 00:04:57.671 --rc genhtml_legend=1 00:04:57.671 --rc geninfo_all_blocks=1 00:04:57.671 --rc geninfo_unexecuted_blocks=1 00:04:57.671 00:04:57.671 ' 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.671 --rc genhtml_branch_coverage=1 00:04:57.671 --rc genhtml_function_coverage=1 00:04:57.671 --rc genhtml_legend=1 00:04:57.671 --rc geninfo_all_blocks=1 00:04:57.671 --rc geninfo_unexecuted_blocks=1 00:04:57.671 00:04:57.671 ' 00:04:57.671 20:58:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.671 20:58:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.671 20:58:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.671 20:58:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.671 ************************************ 00:04:57.671 START TEST skip_rpc 00:04:57.671 ************************************ 00:04:57.671 20:58:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:57.671 20:58:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1113804 00:04:57.671 20:58:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.671 20:58:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.671 20:58:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:57.671 [2024-12-05 20:58:05.749096] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:04:57.671 [2024-12-05 20:58:05.749137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113804 ] 00:04:57.928 [2024-12-05 20:58:05.821453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.928 [2024-12-05 20:58:05.861972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.229 20:58:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1113804 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1113804 ']' 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1113804 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1113804 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1113804' 00:05:03.229 killing process with pid 1113804 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1113804 00:05:03.229 20:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1113804 00:05:03.229 00:05:03.229 real 0m5.366s 00:05:03.229 user 0m5.121s 00:05:03.229 sys 0m0.280s 00:05:03.229 20:58:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.229 20:58:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.229 ************************************ 00:05:03.229 END TEST skip_rpc 00:05:03.229 ************************************ 00:05:03.229 20:58:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:03.229 20:58:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.229 20:58:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.229 20:58:11 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.229 ************************************ 00:05:03.229 START TEST skip_rpc_with_json 00:05:03.229 ************************************ 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1114709 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1114709 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1114709 ']' 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.229 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.229 [2024-12-05 20:58:11.191971] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:03.229 [2024-12-05 20:58:11.192015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114709 ] 00:05:03.229 [2024-12-05 20:58:11.269856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.229 [2024-12-05 20:58:11.312792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.489 [2024-12-05 20:58:11.540045] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:03.489 request: 00:05:03.489 { 00:05:03.489 "trtype": "tcp", 00:05:03.489 "method": "nvmf_get_transports", 00:05:03.489 "req_id": 1 00:05:03.489 } 00:05:03.489 Got JSON-RPC error response 00:05:03.489 response: 00:05:03.489 { 00:05:03.489 "code": -19, 00:05:03.489 "message": "No such device" 00:05:03.489 } 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.489 [2024-12-05 20:58:11.552160] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.489 20:58:11 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.489 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.748 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.748 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.748 { 00:05:03.748 "subsystems": [ 00:05:03.748 { 00:05:03.748 "subsystem": "fsdev", 00:05:03.748 "config": [ 00:05:03.748 { 00:05:03.748 "method": "fsdev_set_opts", 00:05:03.748 "params": { 00:05:03.748 "fsdev_io_pool_size": 65535, 00:05:03.748 "fsdev_io_cache_size": 256 00:05:03.748 } 00:05:03.748 } 00:05:03.748 ] 00:05:03.748 }, 00:05:03.748 { 00:05:03.748 "subsystem": "vfio_user_target", 00:05:03.748 "config": null 00:05:03.748 }, 00:05:03.748 { 00:05:03.748 "subsystem": "keyring", 00:05:03.748 "config": [] 00:05:03.748 }, 00:05:03.748 { 00:05:03.748 "subsystem": "iobuf", 00:05:03.748 "config": [ 00:05:03.748 { 00:05:03.749 "method": "iobuf_set_options", 00:05:03.749 "params": { 00:05:03.749 "small_pool_count": 8192, 00:05:03.749 "large_pool_count": 1024, 00:05:03.749 "small_bufsize": 8192, 00:05:03.749 "large_bufsize": 135168, 00:05:03.749 "enable_numa": false 00:05:03.749 } 00:05:03.749 } 00:05:03.749 ] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "sock", 00:05:03.749 "config": [ 00:05:03.749 { 00:05:03.749 "method": "sock_set_default_impl", 00:05:03.749 "params": { 00:05:03.749 "impl_name": "posix" 00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "sock_impl_set_options", 00:05:03.749 "params": { 00:05:03.749 "impl_name": "ssl", 00:05:03.749 "recv_buf_size": 4096, 00:05:03.749 "send_buf_size": 4096, 
00:05:03.749 "enable_recv_pipe": true, 00:05:03.749 "enable_quickack": false, 00:05:03.749 "enable_placement_id": 0, 00:05:03.749 "enable_zerocopy_send_server": true, 00:05:03.749 "enable_zerocopy_send_client": false, 00:05:03.749 "zerocopy_threshold": 0, 00:05:03.749 "tls_version": 0, 00:05:03.749 "enable_ktls": false 00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "sock_impl_set_options", 00:05:03.749 "params": { 00:05:03.749 "impl_name": "posix", 00:05:03.749 "recv_buf_size": 2097152, 00:05:03.749 "send_buf_size": 2097152, 00:05:03.749 "enable_recv_pipe": true, 00:05:03.749 "enable_quickack": false, 00:05:03.749 "enable_placement_id": 0, 00:05:03.749 "enable_zerocopy_send_server": true, 00:05:03.749 "enable_zerocopy_send_client": false, 00:05:03.749 "zerocopy_threshold": 0, 00:05:03.749 "tls_version": 0, 00:05:03.749 "enable_ktls": false 00:05:03.749 } 00:05:03.749 } 00:05:03.749 ] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "vmd", 00:05:03.749 "config": [] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "accel", 00:05:03.749 "config": [ 00:05:03.749 { 00:05:03.749 "method": "accel_set_options", 00:05:03.749 "params": { 00:05:03.749 "small_cache_size": 128, 00:05:03.749 "large_cache_size": 16, 00:05:03.749 "task_count": 2048, 00:05:03.749 "sequence_count": 2048, 00:05:03.749 "buf_count": 2048 00:05:03.749 } 00:05:03.749 } 00:05:03.749 ] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "bdev", 00:05:03.749 "config": [ 00:05:03.749 { 00:05:03.749 "method": "bdev_set_options", 00:05:03.749 "params": { 00:05:03.749 "bdev_io_pool_size": 65535, 00:05:03.749 "bdev_io_cache_size": 256, 00:05:03.749 "bdev_auto_examine": true, 00:05:03.749 "iobuf_small_cache_size": 128, 00:05:03.749 "iobuf_large_cache_size": 16 00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "bdev_raid_set_options", 00:05:03.749 "params": { 00:05:03.749 "process_window_size_kb": 1024, 00:05:03.749 "process_max_bandwidth_mb_sec": 0 
00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "bdev_iscsi_set_options", 00:05:03.749 "params": { 00:05:03.749 "timeout_sec": 30 00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "bdev_nvme_set_options", 00:05:03.749 "params": { 00:05:03.749 "action_on_timeout": "none", 00:05:03.749 "timeout_us": 0, 00:05:03.749 "timeout_admin_us": 0, 00:05:03.749 "keep_alive_timeout_ms": 10000, 00:05:03.749 "arbitration_burst": 0, 00:05:03.749 "low_priority_weight": 0, 00:05:03.749 "medium_priority_weight": 0, 00:05:03.749 "high_priority_weight": 0, 00:05:03.749 "nvme_adminq_poll_period_us": 10000, 00:05:03.749 "nvme_ioq_poll_period_us": 0, 00:05:03.749 "io_queue_requests": 0, 00:05:03.749 "delay_cmd_submit": true, 00:05:03.749 "transport_retry_count": 4, 00:05:03.749 "bdev_retry_count": 3, 00:05:03.749 "transport_ack_timeout": 0, 00:05:03.749 "ctrlr_loss_timeout_sec": 0, 00:05:03.749 "reconnect_delay_sec": 0, 00:05:03.749 "fast_io_fail_timeout_sec": 0, 00:05:03.749 "disable_auto_failback": false, 00:05:03.749 "generate_uuids": false, 00:05:03.749 "transport_tos": 0, 00:05:03.749 "nvme_error_stat": false, 00:05:03.749 "rdma_srq_size": 0, 00:05:03.749 "io_path_stat": false, 00:05:03.749 "allow_accel_sequence": false, 00:05:03.749 "rdma_max_cq_size": 0, 00:05:03.749 "rdma_cm_event_timeout_ms": 0, 00:05:03.749 "dhchap_digests": [ 00:05:03.749 "sha256", 00:05:03.749 "sha384", 00:05:03.749 "sha512" 00:05:03.749 ], 00:05:03.749 "dhchap_dhgroups": [ 00:05:03.749 "null", 00:05:03.749 "ffdhe2048", 00:05:03.749 "ffdhe3072", 00:05:03.749 "ffdhe4096", 00:05:03.749 "ffdhe6144", 00:05:03.749 "ffdhe8192" 00:05:03.749 ] 00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "bdev_nvme_set_hotplug", 00:05:03.749 "params": { 00:05:03.749 "period_us": 100000, 00:05:03.749 "enable": false 00:05:03.749 } 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "method": "bdev_wait_for_examine" 00:05:03.749 } 00:05:03.749 ] 00:05:03.749 }, 00:05:03.749 { 
00:05:03.749 "subsystem": "scsi", 00:05:03.749 "config": null 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "scheduler", 00:05:03.749 "config": [ 00:05:03.749 { 00:05:03.749 "method": "framework_set_scheduler", 00:05:03.749 "params": { 00:05:03.749 "name": "static" 00:05:03.749 } 00:05:03.749 } 00:05:03.749 ] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "vhost_scsi", 00:05:03.749 "config": [] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "vhost_blk", 00:05:03.749 "config": [] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "ublk", 00:05:03.749 "config": [] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "nbd", 00:05:03.749 "config": [] 00:05:03.749 }, 00:05:03.749 { 00:05:03.749 "subsystem": "nvmf", 00:05:03.749 "config": [ 00:05:03.749 { 00:05:03.749 "method": "nvmf_set_config", 00:05:03.749 "params": { 00:05:03.750 "discovery_filter": "match_any", 00:05:03.750 "admin_cmd_passthru": { 00:05:03.750 "identify_ctrlr": false 00:05:03.750 }, 00:05:03.750 "dhchap_digests": [ 00:05:03.750 "sha256", 00:05:03.750 "sha384", 00:05:03.750 "sha512" 00:05:03.750 ], 00:05:03.750 "dhchap_dhgroups": [ 00:05:03.750 "null", 00:05:03.750 "ffdhe2048", 00:05:03.750 "ffdhe3072", 00:05:03.750 "ffdhe4096", 00:05:03.750 "ffdhe6144", 00:05:03.750 "ffdhe8192" 00:05:03.750 ] 00:05:03.750 } 00:05:03.750 }, 00:05:03.750 { 00:05:03.750 "method": "nvmf_set_max_subsystems", 00:05:03.750 "params": { 00:05:03.750 "max_subsystems": 1024 00:05:03.750 } 00:05:03.750 }, 00:05:03.750 { 00:05:03.750 "method": "nvmf_set_crdt", 00:05:03.750 "params": { 00:05:03.750 "crdt1": 0, 00:05:03.750 "crdt2": 0, 00:05:03.750 "crdt3": 0 00:05:03.750 } 00:05:03.750 }, 00:05:03.750 { 00:05:03.750 "method": "nvmf_create_transport", 00:05:03.750 "params": { 00:05:03.750 "trtype": "TCP", 00:05:03.750 "max_queue_depth": 128, 00:05:03.750 "max_io_qpairs_per_ctrlr": 127, 00:05:03.750 "in_capsule_data_size": 4096, 00:05:03.750 "max_io_size": 131072, 00:05:03.750 
"io_unit_size": 131072, 00:05:03.750 "max_aq_depth": 128, 00:05:03.750 "num_shared_buffers": 511, 00:05:03.750 "buf_cache_size": 4294967295, 00:05:03.750 "dif_insert_or_strip": false, 00:05:03.750 "zcopy": false, 00:05:03.750 "c2h_success": true, 00:05:03.750 "sock_priority": 0, 00:05:03.750 "abort_timeout_sec": 1, 00:05:03.750 "ack_timeout": 0, 00:05:03.750 "data_wr_pool_size": 0 00:05:03.750 } 00:05:03.750 } 00:05:03.750 ] 00:05:03.750 }, 00:05:03.750 { 00:05:03.750 "subsystem": "iscsi", 00:05:03.750 "config": [ 00:05:03.750 { 00:05:03.750 "method": "iscsi_set_options", 00:05:03.750 "params": { 00:05:03.750 "node_base": "iqn.2016-06.io.spdk", 00:05:03.750 "max_sessions": 128, 00:05:03.750 "max_connections_per_session": 2, 00:05:03.750 "max_queue_depth": 64, 00:05:03.750 "default_time2wait": 2, 00:05:03.750 "default_time2retain": 20, 00:05:03.750 "first_burst_length": 8192, 00:05:03.750 "immediate_data": true, 00:05:03.750 "allow_duplicated_isid": false, 00:05:03.750 "error_recovery_level": 0, 00:05:03.750 "nop_timeout": 60, 00:05:03.750 "nop_in_interval": 30, 00:05:03.750 "disable_chap": false, 00:05:03.750 "require_chap": false, 00:05:03.750 "mutual_chap": false, 00:05:03.750 "chap_group": 0, 00:05:03.750 "max_large_datain_per_connection": 64, 00:05:03.750 "max_r2t_per_connection": 4, 00:05:03.750 "pdu_pool_size": 36864, 00:05:03.750 "immediate_data_pool_size": 16384, 00:05:03.750 "data_out_pool_size": 2048 00:05:03.750 } 00:05:03.750 } 00:05:03.750 ] 00:05:03.750 } 00:05:03.750 ] 00:05:03.750 } 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1114709 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1114709 ']' 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1114709 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114709 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114709' 00:05:03.750 killing process with pid 1114709 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1114709 00:05:03.750 20:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1114709 00:05:04.009 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1114900 00:05:04.009 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.009 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1114900 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1114900 ']' 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1114900 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114900 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114900' 00:05:09.279 killing process with pid 1114900 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1114900 00:05:09.279 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1114900 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:09.538 00:05:09.538 real 0m6.289s 00:05:09.538 user 0m6.002s 00:05:09.538 sys 0m0.588s 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.538 ************************************ 00:05:09.538 END TEST skip_rpc_with_json 00:05:09.538 ************************************ 00:05:09.538 20:58:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:09.538 20:58:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.538 20:58:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.538 20:58:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.538 ************************************ 00:05:09.538 START TEST skip_rpc_with_delay 00:05:09.538 ************************************ 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.538 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.539 [2024-12-05 20:58:17.550414] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.539 00:05:09.539 real 0m0.067s 00:05:09.539 user 0m0.047s 00:05:09.539 sys 0m0.020s 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.539 20:58:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 ************************************ 00:05:09.539 END TEST skip_rpc_with_delay 00:05:09.539 ************************************ 00:05:09.539 20:58:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.539 20:58:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.539 20:58:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.539 20:58:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.539 20:58:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.539 20:58:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 ************************************ 00:05:09.539 START TEST exit_on_failed_rpc_init 00:05:09.539 ************************************ 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1115871 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1115871 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1115871 ']' 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.539 20:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.797 [2024-12-05 20:58:17.684465] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:05:09.797 [2024-12-05 20:58:17.684507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115871 ] 00:05:09.797 [2024-12-05 20:58:17.759601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.797 [2024-12-05 20:58:17.801671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.056 
20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:10.056 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.056 [2024-12-05 20:58:18.086567] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:10.056 [2024-12-05 20:58:18.086613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115950 ]
00:05:10.056 [2024-12-05 20:58:18.160342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.315 [2024-12-05 20:58:18.200844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:10.315 [2024-12-05 20:58:18.200897] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:10.315 [2024-12-05 20:58:18.200906] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:10.315 [2024-12-05 20:58:18.200912] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1115871
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1115871 ']'
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1115871
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115871
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115871'
00:05:10.315 killing process with pid 1115871 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1115871
00:05:10.315 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1115871
00:05:10.575
00:05:10.575 real 0m0.966s
00:05:10.575 user 0m1.022s
00:05:10.575 sys 0m0.401s
00:05:10.575 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.575 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:10.575 ************************************
00:05:10.575 END TEST exit_on_failed_rpc_init
00:05:10.575 ************************************
00:05:10.575 20:58:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:10.575
00:05:10.575 real 0m13.150s
00:05:10.575 user 0m12.404s
00:05:10.575 sys 0m1.569s
00:05:10.575 20:58:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.575 20:58:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.575 ************************************
00:05:10.575 END TEST skip_rpc
00:05:10.575 ************************************
00:05:10.575 20:58:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:10.575 20:58:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:10.575 20:58:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:10.575 20:58:18 -- common/autotest_common.sh@10 -- # set +x
00:05:10.834 ************************************
00:05:10.834 START TEST rpc_client
00:05:10.834 ************************************
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:10.834 * Looking for test storage...
00:05:10.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:10.834 20:58:18 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:10.834 20:58:18 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:10.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.835 --rc genhtml_branch_coverage=1
00:05:10.835 --rc genhtml_function_coverage=1
00:05:10.835 --rc genhtml_legend=1
00:05:10.835 --rc geninfo_all_blocks=1
00:05:10.835 --rc geninfo_unexecuted_blocks=1
00:05:10.835
00:05:10.835 '
00:05:10.835 20:58:18 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:10.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.835 --rc genhtml_branch_coverage=1
00:05:10.835 --rc genhtml_function_coverage=1
00:05:10.835 --rc genhtml_legend=1
00:05:10.835 --rc geninfo_all_blocks=1
00:05:10.835 --rc geninfo_unexecuted_blocks=1
00:05:10.835
00:05:10.835 '
00:05:10.835 20:58:18 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:10.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.835 --rc genhtml_branch_coverage=1
00:05:10.835 --rc genhtml_function_coverage=1
00:05:10.835 --rc genhtml_legend=1
00:05:10.835 --rc geninfo_all_blocks=1
00:05:10.835 --rc geninfo_unexecuted_blocks=1
00:05:10.835
00:05:10.835 '
00:05:10.835 20:58:18 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:10.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.835 --rc genhtml_branch_coverage=1
00:05:10.835 --rc genhtml_function_coverage=1
00:05:10.835 --rc genhtml_legend=1
00:05:10.835 --rc geninfo_all_blocks=1
00:05:10.835 --rc geninfo_unexecuted_blocks=1
00:05:10.835
00:05:10.835 '
00:05:10.835 20:58:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:10.835 OK
00:05:10.835 20:58:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:10.835
00:05:10.835 real 0m0.197s
00:05:10.835 user 0m0.126s
00:05:10.835 sys 0m0.085s
00:05:10.835 20:58:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.835 20:58:18 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:10.835 ************************************
00:05:10.835 END TEST rpc_client
00:05:10.835 ************************************
00:05:10.835 20:58:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:10.835 20:58:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:10.835 20:58:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:10.835 20:58:18 -- common/autotest_common.sh@10 -- # set +x
00:05:11.094 ************************************
00:05:11.094 START TEST json_config
00:05:11.094 ************************************
00:05:11.094 20:58:18 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:11.094 20:58:19 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:11.094 20:58:19 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:05:11.094 20:58:19 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:11.094 20:58:19 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:11.094 20:58:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:11.094 20:58:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:11.095 20:58:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:11.095 20:58:19 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:11.095 20:58:19 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:11.095 20:58:19 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:11.095 20:58:19 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:11.095 20:58:19 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:11.095 20:58:19 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:11.095 20:58:19 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:11.095 20:58:19 json_config -- scripts/common.sh@345 -- # : 1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:11.095 20:58:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:11.095 20:58:19 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@353 -- # local d=1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:11.095 20:58:19 json_config -- scripts/common.sh@355 -- # echo 1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:11.095 20:58:19 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:11.095 20:58:19 json_config -- scripts/common.sh@353 -- # local d=2
00:05:11.095 20:58:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:11.095 20:58:19 json_config -- scripts/common.sh@355 -- # echo 2
00:05:11.095 20:58:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:11.095 20:58:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:11.095 20:58:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:11.095 20:58:19 json_config -- scripts/common.sh@368 -- # return 0
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.095 --rc genhtml_branch_coverage=1
00:05:11.095 --rc genhtml_function_coverage=1
00:05:11.095 --rc genhtml_legend=1
00:05:11.095 --rc geninfo_all_blocks=1
00:05:11.095 --rc geninfo_unexecuted_blocks=1
00:05:11.095
00:05:11.095 '
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.095 --rc genhtml_branch_coverage=1
00:05:11.095 --rc genhtml_function_coverage=1
00:05:11.095 --rc genhtml_legend=1
00:05:11.095 --rc geninfo_all_blocks=1
00:05:11.095 --rc geninfo_unexecuted_blocks=1
00:05:11.095
00:05:11.095 '
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.095 --rc genhtml_branch_coverage=1
00:05:11.095 --rc genhtml_function_coverage=1
00:05:11.095 --rc genhtml_legend=1
00:05:11.095 --rc geninfo_all_blocks=1
00:05:11.095 --rc geninfo_unexecuted_blocks=1
00:05:11.095
00:05:11.095 '
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.095 --rc genhtml_branch_coverage=1
00:05:11.095 --rc genhtml_function_coverage=1
00:05:11.095 --rc genhtml_legend=1
00:05:11.095 --rc geninfo_all_blocks=1
00:05:11.095 --rc geninfo_unexecuted_blocks=1
00:05:11.095
00:05:11.095 '
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:11.095 20:58:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:11.095 20:58:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:11.095 20:58:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:11.095 20:58:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:11.095 20:58:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:11.095 20:58:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:11.095 20:58:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:11.095 20:58:19 json_config -- paths/export.sh@5 -- # export PATH
00:05:11.095 20:58:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@51 -- # : 0
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:11.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:11.095 20:58:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:05:11.095 INFO: JSON configuration test init
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:11.095 20:58:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:11.095 20:58:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:05:11.095 20:58:19 json_config -- json_config/common.sh@9 -- # local app=target
00:05:11.095 20:58:19 json_config -- json_config/common.sh@10 -- # shift
00:05:11.095 20:58:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:11.095 20:58:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:11.095 20:58:19 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:11.096 20:58:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:11.096 20:58:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:11.096 20:58:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1116240
00:05:11.096 20:58:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:11.096 Waiting for target to run...
00:05:11.096 20:58:19 json_config -- json_config/common.sh@25 -- # waitforlisten 1116240 /var/tmp/spdk_tgt.sock
00:05:11.096 20:58:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 1116240 ']'
00:05:11.096 20:58:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:11.096 20:58:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:11.096 20:58:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.096 20:58:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:11.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:11.096 20:58:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.096 20:58:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:11.354 [2024-12-05 20:58:19.219110] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:05:11.355 [2024-12-05 20:58:19.219155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116240 ]
00:05:11.613 [2024-12-05 20:58:19.512741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.613 [2024-12-05 20:58:19.549590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.180 20:58:20 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:12.180 20:58:20 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:12.180 20:58:20 json_config -- json_config/common.sh@26 -- # echo ''
00:05:12.180
00:05:12.180 20:58:20 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:05:12.180 20:58:20 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:05:12.180 20:58:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:12.180 20:58:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.180 20:58:20 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:05:12.180 20:58:20 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:05:12.180 20:58:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:12.180 20:58:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.180 20:58:20 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:12.180 20:58:20 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:05:12.180 20:58:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:15.506 20:58:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:15.506 20:58:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:05:15.506 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@51 -- # local get_types
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@54 -- # sort
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:05:15.506 20:58:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:15.506 20:58:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@62 -- # return 0
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:05:15.506 20:58:23 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:05:15.507 20:58:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:15.507 20:58:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:05:15.507 20:58:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:15.507 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:15.507 MallocForNvmf0
00:05:15.764 20:58:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:15.764 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:15.764 MallocForNvmf1
00:05:15.764 20:58:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:15.764 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:16.023 [2024-12-05 20:58:24.009980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:16.023 20:58:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:16.023 20:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:16.282 20:58:24 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:16.282 20:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:16.540 20:58:24 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:16.540 20:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:16.540 20:58:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:16.540 20:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:16.798 [2024-12-05 20:58:24.780393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:16.798 20:58:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:05:16.798 20:58:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:16.798 20:58:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:16.798 20:58:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:05:16.798 20:58:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:16.798 20:58:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:16.798 20:58:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:05:16.798 20:58:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:16.798 20:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:17.057 MallocBdevForConfigChangeCheck
00:05:17.057 20:58:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:05:17.057 20:58:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:17.057 20:58:25 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:17.057 20:58:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:05:17.057 20:58:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:17.623 20:58:25 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:05:17.623 INFO: shutting down applications...
00:05:17.623 20:58:25 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:17.623 20:58:25 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:17.623 20:58:25 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:17.623 20:58:25 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:19.523 Calling clear_iscsi_subsystem
00:05:19.523 Calling clear_nvmf_subsystem
00:05:19.523 Calling clear_nbd_subsystem
00:05:19.523 Calling clear_ublk_subsystem
00:05:19.523 Calling clear_vhost_blk_subsystem
00:05:19.523 Calling clear_vhost_scsi_subsystem
00:05:19.523 Calling clear_bdev_subsystem
00:05:19.523 20:58:27 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:19.523 20:58:27 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:19.523 20:58:27 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:19.523 20:58:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:19.523 20:58:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:19.523 20:58:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:20.091 20:58:27 json_config -- json_config/json_config.sh@352 -- # break
00:05:20.091 20:58:27 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:20.091 20:58:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:20.091 20:58:27 json_config -- json_config/common.sh@31 -- # local app=target
00:05:20.091 20:58:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:20.091 20:58:27 json_config -- json_config/common.sh@35 -- # [[ -n 1116240 ]]
00:05:20.091 20:58:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1116240
00:05:20.091 20:58:27 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:20.091 20:58:27 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:20.091 20:58:27 json_config -- json_config/common.sh@41 -- # kill -0 1116240
00:05:20.091 20:58:27 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:20.660 20:58:28 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:20.660 20:58:28 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:20.660 20:58:28 json_config -- json_config/common.sh@41 -- # kill -0 1116240
00:05:20.660 20:58:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:20.661 20:58:28 json_config -- json_config/common.sh@43 -- # break
00:05:20.661 20:58:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:20.661 20:58:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:20.661 SPDK target shutdown done
00:05:20.661 20:58:28 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:20.661 INFO: relaunching applications...
00:05:20.661 20:58:28 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:20.661 20:58:28 json_config -- json_config/common.sh@9 -- # local app=target
00:05:20.661 20:58:28 json_config -- json_config/common.sh@10 -- # shift
00:05:20.661 20:58:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:20.661 20:58:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:20.661 20:58:28 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:20.661 20:58:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:20.661 20:58:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:20.661 20:58:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1117978
00:05:20.661 20:58:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:20.661 Waiting for target to run...
00:05:20.661 20:58:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:20.661 20:58:28 json_config -- json_config/common.sh@25 -- # waitforlisten 1117978 /var/tmp/spdk_tgt.sock
00:05:20.661 20:58:28 json_config -- common/autotest_common.sh@835 -- # '[' -z 1117978 ']'
00:05:20.661 20:58:28 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:20.661 20:58:28 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.661 20:58:28 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:20.661 20:58:28 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.661 20:58:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.661 [2024-12-05 20:58:28.536074] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:05:20.661 [2024-12-05 20:58:28.536127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117978 ] 00:05:20.921 [2024-12-05 20:58:28.838360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.921 [2024-12-05 20:58:28.872918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.205 [2024-12-05 20:58:31.903352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.205 [2024-12-05 20:58:31.935713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.205 20:58:31 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.205 20:58:31 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:24.205 20:58:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.205 00:05:24.205 20:58:31 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:24.205 20:58:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:24.205 INFO: Checking if target configuration is the same... 
00:05:24.205 20:58:31 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.205 20:58:31 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:24.205 20:58:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.205 + '[' 2 -ne 2 ']' 00:05:24.205 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.205 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:24.205 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.205 +++ basename /dev/fd/62 00:05:24.205 ++ mktemp /tmp/62.XXX 00:05:24.205 + tmp_file_1=/tmp/62.sUa 00:05:24.205 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.205 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.205 + tmp_file_2=/tmp/spdk_tgt_config.json.1LB 00:05:24.205 + ret=0 00:05:24.205 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.463 + diff -u /tmp/62.sUa /tmp/spdk_tgt_config.json.1LB 00:05:24.463 + echo 'INFO: JSON config files are the same' 00:05:24.463 INFO: JSON config files are the same 00:05:24.463 + rm /tmp/62.sUa /tmp/spdk_tgt_config.json.1LB 00:05:24.463 + exit 0 00:05:24.463 20:58:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:24.463 20:58:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:24.463 INFO: changing configuration and checking if this can be detected... 
00:05:24.463 20:58:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.463 20:58:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.463 20:58:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:24.463 20:58:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.463 20:58:32 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.721 + '[' 2 -ne 2 ']' 00:05:24.721 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.721 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:24.721 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.721 +++ basename /dev/fd/62 00:05:24.721 ++ mktemp /tmp/62.XXX 00:05:24.721 + tmp_file_1=/tmp/62.g7n 00:05:24.721 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.721 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.721 + tmp_file_2=/tmp/spdk_tgt_config.json.QZi 00:05:24.721 + ret=0 00:05:24.721 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.980 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.980 + diff -u /tmp/62.g7n /tmp/spdk_tgt_config.json.QZi 00:05:24.980 + ret=1 00:05:24.980 + echo '=== Start of file: /tmp/62.g7n ===' 00:05:24.980 + cat /tmp/62.g7n 00:05:24.980 + echo '=== End of file: /tmp/62.g7n ===' 00:05:24.980 + echo '' 00:05:24.980 + echo '=== Start of file: /tmp/spdk_tgt_config.json.QZi ===' 00:05:24.980 + cat /tmp/spdk_tgt_config.json.QZi 00:05:24.980 + echo '=== End of file: /tmp/spdk_tgt_config.json.QZi ===' 00:05:24.980 + echo '' 00:05:24.980 + rm /tmp/62.g7n /tmp/spdk_tgt_config.json.QZi 00:05:24.980 + exit 1 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:24.980 INFO: configuration change detected. 
00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.980 20:58:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.980 20:58:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@324 -- # [[ -n 1117978 ]] 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.980 20:58:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.980 20:58:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.980 20:58:32 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.980 20:58:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.980 20:58:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.980 20:58:33 json_config -- json_config/json_config.sh@330 -- # killprocess 1117978 00:05:24.980 20:58:33 json_config -- common/autotest_common.sh@954 -- # '[' -z 1117978 ']' 00:05:24.980 20:58:33 json_config -- common/autotest_common.sh@958 -- # kill -0 
1117978 00:05:24.980 20:58:33 json_config -- common/autotest_common.sh@959 -- # uname 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117978 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117978' 00:05:24.981 killing process with pid 1117978 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@973 -- # kill 1117978 00:05:24.981 20:58:33 json_config -- common/autotest_common.sh@978 -- # wait 1117978 00:05:27.515 20:58:35 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.515 20:58:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:27.515 20:58:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.515 20:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 20:58:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:27.515 20:58:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:27.515 INFO: Success 00:05:27.515 00:05:27.515 real 0m16.239s 00:05:27.515 user 0m16.864s 00:05:27.515 sys 0m2.355s 00:05:27.515 20:58:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.515 20:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 ************************************ 00:05:27.515 END TEST json_config 00:05:27.515 ************************************ 00:05:27.515 20:58:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.515 20:58:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.515 20:58:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.515 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 ************************************ 00:05:27.515 START TEST json_config_extra_key 00:05:27.515 ************************************ 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.515 20:58:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 
00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 00:05:27.515 ' 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 00:05:27.515 ' 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 00:05:27.515 ' 00:05:27.515 20:58:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 00:05:27.515 ' 00:05:27.515 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:27.515 20:58:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.516 20:58:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.516 20:58:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.516 20:58:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.516 20:58:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.516 20:58:35 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 20:58:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 20:58:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 20:58:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:27.516 20:58:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:27.516 20:58:35 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.516 20:58:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:27.516 INFO: launching applications... 00:05:27.516 20:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1119256 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.516 Waiting for target to run... 
00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1119256 /var/tmp/spdk_tgt.sock 00:05:27.516 20:58:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1119256 ']' 00:05:27.516 20:58:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.516 20:58:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.516 20:58:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.516 20:58:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.516 20:58:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.516 20:58:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.516 [2024-12-05 20:58:35.515099] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:27.516 [2024-12-05 20:58:35.515148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119256 ] 00:05:27.775 [2024-12-05 20:58:35.798942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.775 [2024-12-05 20:58:35.830409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.342 20:58:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.342 20:58:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:28.342 00:05:28.342 20:58:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:28.342 INFO: shutting down applications... 00:05:28.342 20:58:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1119256 ]] 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1119256 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1119256 00:05:28.342 20:58:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.910 20:58:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.910 20:58:36 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.910 20:58:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1119256 00:05:28.910 20:58:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.910 20:58:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.910 20:58:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.910 20:58:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.910 SPDK target shutdown done 00:05:28.910 20:58:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.910 Success 00:05:28.910 00:05:28.910 real 0m1.552s 00:05:28.910 user 0m1.335s 00:05:28.910 sys 0m0.386s 00:05:28.910 20:58:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.910 20:58:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.910 ************************************ 00:05:28.910 END TEST json_config_extra_key 00:05:28.910 ************************************ 00:05:28.910 20:58:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.910 20:58:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.910 20:58:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.910 20:58:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.910 ************************************ 00:05:28.910 START TEST alias_rpc 00:05:28.910 ************************************ 00:05:28.910 20:58:36 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.910 * Looking for test storage... 
00:05:28.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:28.910 20:58:36 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.910 20:58:36 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.910 20:58:36 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.169 20:58:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.169 --rc genhtml_branch_coverage=1 00:05:29.169 --rc genhtml_function_coverage=1 00:05:29.169 --rc genhtml_legend=1 00:05:29.169 --rc geninfo_all_blocks=1 00:05:29.169 --rc geninfo_unexecuted_blocks=1 00:05:29.169 00:05:29.169 ' 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.169 --rc genhtml_branch_coverage=1 00:05:29.169 --rc genhtml_function_coverage=1 00:05:29.169 --rc genhtml_legend=1 00:05:29.169 --rc geninfo_all_blocks=1 00:05:29.169 --rc geninfo_unexecuted_blocks=1 00:05:29.169 00:05:29.169 ' 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:29.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.169 --rc genhtml_branch_coverage=1 00:05:29.169 --rc genhtml_function_coverage=1 00:05:29.169 --rc genhtml_legend=1 00:05:29.169 --rc geninfo_all_blocks=1 00:05:29.169 --rc geninfo_unexecuted_blocks=1 00:05:29.169 00:05:29.169 ' 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.169 --rc genhtml_branch_coverage=1 00:05:29.169 --rc genhtml_function_coverage=1 00:05:29.169 --rc genhtml_legend=1 00:05:29.169 --rc geninfo_all_blocks=1 00:05:29.169 --rc geninfo_unexecuted_blocks=1 00:05:29.169 00:05:29.169 ' 00:05:29.169 20:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.169 20:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1119541 00:05:29.169 20:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1119541 00:05:29.169 20:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1119541 ']' 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.169 20:58:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.169 [2024-12-05 20:58:37.129990] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:29.169 [2024-12-05 20:58:37.130039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119541 ] 00:05:29.169 [2024-12-05 20:58:37.204021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.169 [2024-12-05 20:58:37.245904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.427 20:58:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.427 20:58:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.427 20:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.686 20:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1119541 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1119541 ']' 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1119541 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1119541 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119541' 00:05:29.686 killing process with pid 1119541 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 1119541 00:05:29.686 20:58:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 1119541 00:05:29.944 00:05:29.944 real 0m1.132s 00:05:29.944 user 0m1.148s 00:05:29.944 sys 0m0.412s 00:05:29.945 20:58:38 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.945 20:58:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.945 ************************************ 00:05:29.945 END TEST alias_rpc 00:05:29.945 ************************************ 00:05:30.203 20:58:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:30.203 20:58:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.203 20:58:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.203 20:58:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.203 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.203 ************************************ 00:05:30.203 START TEST spdkcli_tcp 00:05:30.203 ************************************ 00:05:30.203 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.203 * Looking for test storage... 
00:05:30.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:30.203 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:30.203 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:30.203 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:30.203 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.203 20:58:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.204 20:58:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.204 --rc genhtml_branch_coverage=1 00:05:30.204 --rc genhtml_function_coverage=1 00:05:30.204 --rc genhtml_legend=1 00:05:30.204 --rc geninfo_all_blocks=1 00:05:30.204 --rc geninfo_unexecuted_blocks=1 00:05:30.204 00:05:30.204 ' 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.204 --rc genhtml_branch_coverage=1 00:05:30.204 --rc genhtml_function_coverage=1 00:05:30.204 --rc genhtml_legend=1 00:05:30.204 --rc geninfo_all_blocks=1 00:05:30.204 --rc geninfo_unexecuted_blocks=1 00:05:30.204 00:05:30.204 ' 00:05:30.204 20:58:38 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.204 --rc genhtml_branch_coverage=1 00:05:30.204 --rc genhtml_function_coverage=1 00:05:30.204 --rc genhtml_legend=1 00:05:30.204 --rc geninfo_all_blocks=1 00:05:30.204 --rc geninfo_unexecuted_blocks=1 00:05:30.204 00:05:30.204 ' 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.204 --rc genhtml_branch_coverage=1 00:05:30.204 --rc genhtml_function_coverage=1 00:05:30.204 --rc genhtml_legend=1 00:05:30.204 --rc geninfo_all_blocks=1 00:05:30.204 --rc geninfo_unexecuted_blocks=1 00:05:30.204 00:05:30.204 ' 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1119828 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1119828 00:05:30.204 20:58:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1119828 ']' 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.204 20:58:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 [2024-12-05 20:58:38.338048] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:05:30.462 [2024-12-05 20:58:38.338098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119828 ] 00:05:30.462 [2024-12-05 20:58:38.412309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.462 [2024-12-05 20:58:38.452758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.462 [2024-12-05 20:58:38.452759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.397 20:58:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.397 20:58:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:31.397 20:58:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:31.397 20:58:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1120059 00:05:31.397 20:58:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:31.397 [ 00:05:31.397 "bdev_malloc_delete", 00:05:31.397 "bdev_malloc_create", 00:05:31.397 "bdev_null_resize", 00:05:31.397 "bdev_null_delete", 00:05:31.397 "bdev_null_create", 00:05:31.397 "bdev_nvme_cuse_unregister", 00:05:31.397 "bdev_nvme_cuse_register", 00:05:31.397 "bdev_opal_new_user", 00:05:31.397 "bdev_opal_set_lock_state", 00:05:31.397 "bdev_opal_delete", 00:05:31.397 "bdev_opal_get_info", 00:05:31.397 "bdev_opal_create", 00:05:31.397 "bdev_nvme_opal_revert", 00:05:31.397 "bdev_nvme_opal_init", 00:05:31.397 "bdev_nvme_send_cmd", 00:05:31.397 "bdev_nvme_set_keys", 00:05:31.397 "bdev_nvme_get_path_iostat", 00:05:31.397 "bdev_nvme_get_mdns_discovery_info", 00:05:31.397 "bdev_nvme_stop_mdns_discovery", 00:05:31.397 "bdev_nvme_start_mdns_discovery", 00:05:31.397 "bdev_nvme_set_multipath_policy", 00:05:31.397 "bdev_nvme_set_preferred_path", 00:05:31.397 "bdev_nvme_get_io_paths", 00:05:31.397 "bdev_nvme_remove_error_injection", 00:05:31.397 "bdev_nvme_add_error_injection", 00:05:31.397 "bdev_nvme_get_discovery_info", 00:05:31.397 "bdev_nvme_stop_discovery", 00:05:31.397 "bdev_nvme_start_discovery", 00:05:31.397 "bdev_nvme_get_controller_health_info", 00:05:31.397 "bdev_nvme_disable_controller", 00:05:31.397 "bdev_nvme_enable_controller", 00:05:31.397 "bdev_nvme_reset_controller", 00:05:31.397 "bdev_nvme_get_transport_statistics", 00:05:31.397 "bdev_nvme_apply_firmware", 00:05:31.397 "bdev_nvme_detach_controller", 00:05:31.397 "bdev_nvme_get_controllers", 00:05:31.397 "bdev_nvme_attach_controller", 00:05:31.397 "bdev_nvme_set_hotplug", 00:05:31.397 "bdev_nvme_set_options", 00:05:31.397 "bdev_passthru_delete", 00:05:31.397 "bdev_passthru_create", 00:05:31.397 "bdev_lvol_set_parent_bdev", 00:05:31.397 "bdev_lvol_set_parent", 00:05:31.397 "bdev_lvol_check_shallow_copy", 00:05:31.397 "bdev_lvol_start_shallow_copy", 00:05:31.397 
"bdev_lvol_grow_lvstore", 00:05:31.397 "bdev_lvol_get_lvols", 00:05:31.397 "bdev_lvol_get_lvstores", 00:05:31.397 "bdev_lvol_delete", 00:05:31.397 "bdev_lvol_set_read_only", 00:05:31.397 "bdev_lvol_resize", 00:05:31.397 "bdev_lvol_decouple_parent", 00:05:31.397 "bdev_lvol_inflate", 00:05:31.397 "bdev_lvol_rename", 00:05:31.397 "bdev_lvol_clone_bdev", 00:05:31.397 "bdev_lvol_clone", 00:05:31.397 "bdev_lvol_snapshot", 00:05:31.397 "bdev_lvol_create", 00:05:31.397 "bdev_lvol_delete_lvstore", 00:05:31.397 "bdev_lvol_rename_lvstore", 00:05:31.397 "bdev_lvol_create_lvstore", 00:05:31.397 "bdev_raid_set_options", 00:05:31.397 "bdev_raid_remove_base_bdev", 00:05:31.397 "bdev_raid_add_base_bdev", 00:05:31.397 "bdev_raid_delete", 00:05:31.397 "bdev_raid_create", 00:05:31.397 "bdev_raid_get_bdevs", 00:05:31.397 "bdev_error_inject_error", 00:05:31.397 "bdev_error_delete", 00:05:31.397 "bdev_error_create", 00:05:31.397 "bdev_split_delete", 00:05:31.397 "bdev_split_create", 00:05:31.397 "bdev_delay_delete", 00:05:31.397 "bdev_delay_create", 00:05:31.397 "bdev_delay_update_latency", 00:05:31.397 "bdev_zone_block_delete", 00:05:31.397 "bdev_zone_block_create", 00:05:31.397 "blobfs_create", 00:05:31.397 "blobfs_detect", 00:05:31.397 "blobfs_set_cache_size", 00:05:31.397 "bdev_aio_delete", 00:05:31.397 "bdev_aio_rescan", 00:05:31.397 "bdev_aio_create", 00:05:31.397 "bdev_ftl_set_property", 00:05:31.397 "bdev_ftl_get_properties", 00:05:31.397 "bdev_ftl_get_stats", 00:05:31.397 "bdev_ftl_unmap", 00:05:31.397 "bdev_ftl_unload", 00:05:31.397 "bdev_ftl_delete", 00:05:31.397 "bdev_ftl_load", 00:05:31.397 "bdev_ftl_create", 00:05:31.397 "bdev_virtio_attach_controller", 00:05:31.397 "bdev_virtio_scsi_get_devices", 00:05:31.397 "bdev_virtio_detach_controller", 00:05:31.397 "bdev_virtio_blk_set_hotplug", 00:05:31.397 "bdev_iscsi_delete", 00:05:31.397 "bdev_iscsi_create", 00:05:31.397 "bdev_iscsi_set_options", 00:05:31.397 "accel_error_inject_error", 00:05:31.397 "ioat_scan_accel_module", 
00:05:31.397 "dsa_scan_accel_module", 00:05:31.397 "iaa_scan_accel_module", 00:05:31.397 "vfu_virtio_create_fs_endpoint", 00:05:31.397 "vfu_virtio_create_scsi_endpoint", 00:05:31.397 "vfu_virtio_scsi_remove_target", 00:05:31.397 "vfu_virtio_scsi_add_target", 00:05:31.397 "vfu_virtio_create_blk_endpoint", 00:05:31.397 "vfu_virtio_delete_endpoint", 00:05:31.397 "keyring_file_remove_key", 00:05:31.397 "keyring_file_add_key", 00:05:31.397 "keyring_linux_set_options", 00:05:31.397 "fsdev_aio_delete", 00:05:31.397 "fsdev_aio_create", 00:05:31.397 "iscsi_get_histogram", 00:05:31.397 "iscsi_enable_histogram", 00:05:31.397 "iscsi_set_options", 00:05:31.397 "iscsi_get_auth_groups", 00:05:31.397 "iscsi_auth_group_remove_secret", 00:05:31.397 "iscsi_auth_group_add_secret", 00:05:31.397 "iscsi_delete_auth_group", 00:05:31.397 "iscsi_create_auth_group", 00:05:31.397 "iscsi_set_discovery_auth", 00:05:31.397 "iscsi_get_options", 00:05:31.397 "iscsi_target_node_request_logout", 00:05:31.397 "iscsi_target_node_set_redirect", 00:05:31.397 "iscsi_target_node_set_auth", 00:05:31.397 "iscsi_target_node_add_lun", 00:05:31.397 "iscsi_get_stats", 00:05:31.397 "iscsi_get_connections", 00:05:31.397 "iscsi_portal_group_set_auth", 00:05:31.397 "iscsi_start_portal_group", 00:05:31.397 "iscsi_delete_portal_group", 00:05:31.397 "iscsi_create_portal_group", 00:05:31.397 "iscsi_get_portal_groups", 00:05:31.397 "iscsi_delete_target_node", 00:05:31.397 "iscsi_target_node_remove_pg_ig_maps", 00:05:31.397 "iscsi_target_node_add_pg_ig_maps", 00:05:31.397 "iscsi_create_target_node", 00:05:31.397 "iscsi_get_target_nodes", 00:05:31.397 "iscsi_delete_initiator_group", 00:05:31.397 "iscsi_initiator_group_remove_initiators", 00:05:31.397 "iscsi_initiator_group_add_initiators", 00:05:31.397 "iscsi_create_initiator_group", 00:05:31.397 "iscsi_get_initiator_groups", 00:05:31.397 "nvmf_set_crdt", 00:05:31.397 "nvmf_set_config", 00:05:31.397 "nvmf_set_max_subsystems", 00:05:31.397 "nvmf_stop_mdns_prr", 
00:05:31.397 "nvmf_publish_mdns_prr", 00:05:31.397 "nvmf_subsystem_get_listeners", 00:05:31.397 "nvmf_subsystem_get_qpairs", 00:05:31.397 "nvmf_subsystem_get_controllers", 00:05:31.397 "nvmf_get_stats", 00:05:31.397 "nvmf_get_transports", 00:05:31.397 "nvmf_create_transport", 00:05:31.397 "nvmf_get_targets", 00:05:31.397 "nvmf_delete_target", 00:05:31.397 "nvmf_create_target", 00:05:31.397 "nvmf_subsystem_allow_any_host", 00:05:31.397 "nvmf_subsystem_set_keys", 00:05:31.397 "nvmf_subsystem_remove_host", 00:05:31.397 "nvmf_subsystem_add_host", 00:05:31.397 "nvmf_ns_remove_host", 00:05:31.397 "nvmf_ns_add_host", 00:05:31.397 "nvmf_subsystem_remove_ns", 00:05:31.397 "nvmf_subsystem_set_ns_ana_group", 00:05:31.397 "nvmf_subsystem_add_ns", 00:05:31.397 "nvmf_subsystem_listener_set_ana_state", 00:05:31.397 "nvmf_discovery_get_referrals", 00:05:31.397 "nvmf_discovery_remove_referral", 00:05:31.397 "nvmf_discovery_add_referral", 00:05:31.397 "nvmf_subsystem_remove_listener", 00:05:31.397 "nvmf_subsystem_add_listener", 00:05:31.397 "nvmf_delete_subsystem", 00:05:31.398 "nvmf_create_subsystem", 00:05:31.398 "nvmf_get_subsystems", 00:05:31.398 "env_dpdk_get_mem_stats", 00:05:31.398 "nbd_get_disks", 00:05:31.398 "nbd_stop_disk", 00:05:31.398 "nbd_start_disk", 00:05:31.398 "ublk_recover_disk", 00:05:31.398 "ublk_get_disks", 00:05:31.398 "ublk_stop_disk", 00:05:31.398 "ublk_start_disk", 00:05:31.398 "ublk_destroy_target", 00:05:31.398 "ublk_create_target", 00:05:31.398 "virtio_blk_create_transport", 00:05:31.398 "virtio_blk_get_transports", 00:05:31.398 "vhost_controller_set_coalescing", 00:05:31.398 "vhost_get_controllers", 00:05:31.398 "vhost_delete_controller", 00:05:31.398 "vhost_create_blk_controller", 00:05:31.398 "vhost_scsi_controller_remove_target", 00:05:31.398 "vhost_scsi_controller_add_target", 00:05:31.398 "vhost_start_scsi_controller", 00:05:31.398 "vhost_create_scsi_controller", 00:05:31.398 "thread_set_cpumask", 00:05:31.398 "scheduler_set_options", 00:05:31.398 
"framework_get_governor", 00:05:31.398 "framework_get_scheduler", 00:05:31.398 "framework_set_scheduler", 00:05:31.398 "framework_get_reactors", 00:05:31.398 "thread_get_io_channels", 00:05:31.398 "thread_get_pollers", 00:05:31.398 "thread_get_stats", 00:05:31.398 "framework_monitor_context_switch", 00:05:31.398 "spdk_kill_instance", 00:05:31.398 "log_enable_timestamps", 00:05:31.398 "log_get_flags", 00:05:31.398 "log_clear_flag", 00:05:31.398 "log_set_flag", 00:05:31.398 "log_get_level", 00:05:31.398 "log_set_level", 00:05:31.398 "log_get_print_level", 00:05:31.398 "log_set_print_level", 00:05:31.398 "framework_enable_cpumask_locks", 00:05:31.398 "framework_disable_cpumask_locks", 00:05:31.398 "framework_wait_init", 00:05:31.398 "framework_start_init", 00:05:31.398 "scsi_get_devices", 00:05:31.398 "bdev_get_histogram", 00:05:31.398 "bdev_enable_histogram", 00:05:31.398 "bdev_set_qos_limit", 00:05:31.398 "bdev_set_qd_sampling_period", 00:05:31.398 "bdev_get_bdevs", 00:05:31.398 "bdev_reset_iostat", 00:05:31.398 "bdev_get_iostat", 00:05:31.398 "bdev_examine", 00:05:31.398 "bdev_wait_for_examine", 00:05:31.398 "bdev_set_options", 00:05:31.398 "accel_get_stats", 00:05:31.398 "accel_set_options", 00:05:31.398 "accel_set_driver", 00:05:31.398 "accel_crypto_key_destroy", 00:05:31.398 "accel_crypto_keys_get", 00:05:31.398 "accel_crypto_key_create", 00:05:31.398 "accel_assign_opc", 00:05:31.398 "accel_get_module_info", 00:05:31.398 "accel_get_opc_assignments", 00:05:31.398 "vmd_rescan", 00:05:31.398 "vmd_remove_device", 00:05:31.398 "vmd_enable", 00:05:31.398 "sock_get_default_impl", 00:05:31.398 "sock_set_default_impl", 00:05:31.398 "sock_impl_set_options", 00:05:31.398 "sock_impl_get_options", 00:05:31.398 "iobuf_get_stats", 00:05:31.398 "iobuf_set_options", 00:05:31.398 "keyring_get_keys", 00:05:31.398 "vfu_tgt_set_base_path", 00:05:31.398 "framework_get_pci_devices", 00:05:31.398 "framework_get_config", 00:05:31.398 "framework_get_subsystems", 00:05:31.398 
"fsdev_set_opts", 00:05:31.398 "fsdev_get_opts", 00:05:31.398 "trace_get_info", 00:05:31.398 "trace_get_tpoint_group_mask", 00:05:31.398 "trace_disable_tpoint_group", 00:05:31.398 "trace_enable_tpoint_group", 00:05:31.398 "trace_clear_tpoint_mask", 00:05:31.398 "trace_set_tpoint_mask", 00:05:31.398 "notify_get_notifications", 00:05:31.398 "notify_get_types", 00:05:31.398 "spdk_get_version", 00:05:31.398 "rpc_get_methods" 00:05:31.398 ] 00:05:31.398 20:58:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.398 20:58:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:31.398 20:58:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1119828 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1119828 ']' 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1119828 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1119828 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119828' 00:05:31.398 killing process with pid 1119828 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1119828 00:05:31.398 20:58:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1119828 00:05:31.656 00:05:31.656 real 0m1.639s 00:05:31.656 user 0m3.063s 00:05:31.656 sys 0m0.459s 00:05:31.656 20:58:39 spdkcli_tcp -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:05:31.656 20:58:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 ************************************ 00:05:31.656 END TEST spdkcli_tcp 00:05:31.656 ************************************ 00:05:31.915 20:58:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.915 20:58:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.915 20:58:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.915 20:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 ************************************ 00:05:31.915 START TEST dpdk_mem_utility 00:05:31.915 ************************************ 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.915 * Looking for test storage... 00:05:31.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.915 20:58:39 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.915 20:58:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.915 20:58:39 dpdk_mem_utility 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.915 --rc genhtml_branch_coverage=1 00:05:31.915 --rc genhtml_function_coverage=1 00:05:31.915 --rc genhtml_legend=1 00:05:31.915 --rc geninfo_all_blocks=1 00:05:31.915 --rc geninfo_unexecuted_blocks=1 00:05:31.915 00:05:31.915 ' 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.915 --rc genhtml_branch_coverage=1 00:05:31.915 --rc genhtml_function_coverage=1 00:05:31.915 --rc genhtml_legend=1 00:05:31.915 --rc geninfo_all_blocks=1 00:05:31.915 --rc geninfo_unexecuted_blocks=1 00:05:31.915 00:05:31.915 ' 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.915 --rc genhtml_branch_coverage=1 00:05:31.915 --rc genhtml_function_coverage=1 00:05:31.915 --rc genhtml_legend=1 00:05:31.915 --rc geninfo_all_blocks=1 00:05:31.915 --rc geninfo_unexecuted_blocks=1 00:05:31.915 00:05:31.915 ' 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.915 --rc genhtml_branch_coverage=1 00:05:31.915 --rc genhtml_function_coverage=1 00:05:31.915 --rc genhtml_legend=1 00:05:31.915 --rc geninfo_all_blocks=1 00:05:31.915 --rc geninfo_unexecuted_blocks=1 00:05:31.915 00:05:31.915 ' 00:05:31.915 20:58:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.915 20:58:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.915 20:58:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # 
spdkpid=1120207 00:05:31.915 20:58:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1120207 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1120207 ']' 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.915 20:58:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.174 [2024-12-05 20:58:40.033088] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:05:32.174 [2024-12-05 20:58:40.033139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120207 ] 00:05:32.174 [2024-12-05 20:58:40.108249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.174 [2024-12-05 20:58:40.151097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.433 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.433 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:32.433 20:58:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:32.433 20:58:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:32.433 20:58:40 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.433 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.433 { 00:05:32.433 "filename": "/tmp/spdk_mem_dump.txt" 00:05:32.433 } 00:05:32.433 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.433 20:58:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.433 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:32.433 1 heaps totaling size 818.000000 MiB 00:05:32.433 size: 818.000000 MiB heap id: 0 00:05:32.433 end heaps---------- 00:05:32.433 9 mempools totaling size 603.782043 MiB 00:05:32.433 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:32.433 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:32.433 size: 100.555481 MiB name: bdev_io_1120207 00:05:32.433 size: 50.003479 MiB name: msgpool_1120207 00:05:32.433 size: 36.509338 MiB name: fsdev_io_1120207 00:05:32.433 size: 21.763794 MiB name: PDU_Pool 00:05:32.433 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:32.433 size: 4.133484 MiB name: evtpool_1120207 00:05:32.433 size: 0.026123 MiB name: Session_Pool 00:05:32.433 end mempools------- 00:05:32.433 6 memzones totaling size 4.142822 MiB 00:05:32.433 size: 1.000366 MiB name: RG_ring_0_1120207 00:05:32.433 size: 1.000366 MiB name: RG_ring_1_1120207 00:05:32.433 size: 1.000366 MiB name: RG_ring_4_1120207 00:05:32.433 size: 1.000366 MiB name: RG_ring_5_1120207 00:05:32.433 size: 0.125366 MiB name: RG_ring_2_1120207 00:05:32.433 size: 0.015991 MiB name: RG_ring_3_1120207 00:05:32.433 end memzones------- 00:05:32.433 20:58:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.433 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:32.433 list of free elements. 
size: 10.852478 MiB 00:05:32.433 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:32.433 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:32.433 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:32.433 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:32.433 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:32.433 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:32.433 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:32.433 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:32.433 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:32.434 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:32.434 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:32.434 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:32.434 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:32.434 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:32.434 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:32.434 list of standard malloc elements. 
size: 199.218628 MiB 00:05:32.434 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:32.434 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:32.434 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:32.434 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:32.434 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:32.434 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:32.434 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:32.434 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:32.434 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:32.434 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:32.434 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:32.434 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:32.434 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:32.434 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:32.434 list of memzone associated elements. 
size: 607.928894 MiB 00:05:32.434 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:32.434 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.434 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:32.434 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.434 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:32.434 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1120207_0 00:05:32.434 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:32.434 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1120207_0 00:05:32.434 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:32.434 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1120207_0 00:05:32.434 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:32.434 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.434 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:32.434 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.434 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:32.434 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1120207_0 00:05:32.434 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:32.434 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1120207 00:05:32.434 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:32.434 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1120207 00:05:32.434 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:32.434 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.434 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:32.434 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.434 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:32.434 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.434 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:32.434 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.434 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:32.434 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1120207 00:05:32.434 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:32.434 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1120207 00:05:32.434 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:32.434 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1120207 00:05:32.434 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:32.434 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1120207 00:05:32.434 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:32.434 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1120207 00:05:32.434 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:32.434 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1120207 00:05:32.434 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:32.434 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.434 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:32.434 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.434 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:32.434 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.434 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:32.434 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1120207 00:05:32.434 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:32.434 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1120207 00:05:32.434 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:32.434 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.434 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:32.434 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.434 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:32.434 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1120207 00:05:32.434 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:32.434 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.434 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:32.434 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1120207 00:05:32.434 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:32.434 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1120207 00:05:32.434 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:32.434 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1120207 00:05:32.434 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:32.434 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.434 20:58:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.434 20:58:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1120207 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1120207 ']' 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1120207 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120207 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.434 20:58:40 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.434 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120207' 00:05:32.434 killing process with pid 1120207 00:05:32.693 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1120207 00:05:32.693 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1120207 00:05:32.951 00:05:32.951 real 0m1.028s 00:05:32.951 user 0m0.974s 00:05:32.951 sys 0m0.415s 00:05:32.951 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.951 20:58:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.951 ************************************ 00:05:32.951 END TEST dpdk_mem_utility 00:05:32.951 ************************************ 00:05:32.951 20:58:40 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.951 20:58:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.951 20:58:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.951 20:58:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.951 ************************************ 00:05:32.951 START TEST event 00:05:32.951 ************************************ 00:05:32.951 20:58:40 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.951 * Looking for test storage... 
00:05:32.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.951 20:58:40 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.951 20:58:40 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.951 20:58:40 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.210 20:58:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.210 20:58:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.210 20:58:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.210 20:58:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.210 20:58:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.210 20:58:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.210 20:58:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.210 20:58:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.210 20:58:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.210 20:58:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.210 20:58:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.210 20:58:41 event -- scripts/common.sh@344 -- # case "$op" in 00:05:33.210 20:58:41 event -- scripts/common.sh@345 -- # : 1 00:05:33.210 20:58:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.210 20:58:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.210 20:58:41 event -- scripts/common.sh@365 -- # decimal 1 00:05:33.210 20:58:41 event -- scripts/common.sh@353 -- # local d=1 00:05:33.210 20:58:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.210 20:58:41 event -- scripts/common.sh@355 -- # echo 1 00:05:33.210 20:58:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.210 20:58:41 event -- scripts/common.sh@366 -- # decimal 2 00:05:33.210 20:58:41 event -- scripts/common.sh@353 -- # local d=2 00:05:33.210 20:58:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.210 20:58:41 event -- scripts/common.sh@355 -- # echo 2 00:05:33.210 20:58:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.210 20:58:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.210 20:58:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.210 20:58:41 event -- scripts/common.sh@368 -- # return 0 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.210 --rc genhtml_branch_coverage=1 00:05:33.210 --rc genhtml_function_coverage=1 00:05:33.210 --rc genhtml_legend=1 00:05:33.210 --rc geninfo_all_blocks=1 00:05:33.210 --rc geninfo_unexecuted_blocks=1 00:05:33.210 00:05:33.210 ' 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.210 --rc genhtml_branch_coverage=1 00:05:33.210 --rc genhtml_function_coverage=1 00:05:33.210 --rc genhtml_legend=1 00:05:33.210 --rc geninfo_all_blocks=1 00:05:33.210 --rc geninfo_unexecuted_blocks=1 00:05:33.210 00:05:33.210 ' 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.210 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:33.210 --rc genhtml_branch_coverage=1 00:05:33.210 --rc genhtml_function_coverage=1 00:05:33.210 --rc genhtml_legend=1 00:05:33.210 --rc geninfo_all_blocks=1 00:05:33.210 --rc geninfo_unexecuted_blocks=1 00:05:33.210 00:05:33.210 ' 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.210 --rc genhtml_branch_coverage=1 00:05:33.210 --rc genhtml_function_coverage=1 00:05:33.210 --rc genhtml_legend=1 00:05:33.210 --rc geninfo_all_blocks=1 00:05:33.210 --rc geninfo_unexecuted_blocks=1 00:05:33.210 00:05:33.210 ' 00:05:33.210 20:58:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:33.210 20:58:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:33.210 20:58:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:33.210 20:58:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.210 20:58:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.210 ************************************ 00:05:33.210 START TEST event_perf 00:05:33.210 ************************************ 00:05:33.210 20:58:41 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.210 Running I/O for 1 seconds...[2024-12-05 20:58:41.136382] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:33.210 [2024-12-05 20:58:41.136450] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120438 ] 00:05:33.210 [2024-12-05 20:58:41.212872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.210 [2024-12-05 20:58:41.256046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.210 [2024-12-05 20:58:41.256155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.210 [2024-12-05 20:58:41.256239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.210 [2024-12-05 20:58:41.256239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.584 Running I/O for 1 seconds... 00:05:34.584 lcore 0: 208208 00:05:34.584 lcore 1: 208207 00:05:34.584 lcore 2: 208208 00:05:34.584 lcore 3: 208208 00:05:34.584 done. 
00:05:34.584 00:05:34.584 real 0m1.181s 00:05:34.584 user 0m4.098s 00:05:34.584 sys 0m0.080s 00:05:34.584 20:58:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.584 20:58:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.584 ************************************ 00:05:34.584 END TEST event_perf 00:05:34.584 ************************************ 00:05:34.584 20:58:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.584 20:58:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:34.584 20:58:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.584 20:58:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.584 ************************************ 00:05:34.584 START TEST event_reactor 00:05:34.584 ************************************ 00:05:34.584 20:58:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.584 [2024-12-05 20:58:42.390547] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:34.584 [2024-12-05 20:58:42.390617] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120688 ] 00:05:34.584 [2024-12-05 20:58:42.472843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.584 [2024-12-05 20:58:42.513359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.519 test_start 00:05:35.519 oneshot 00:05:35.519 tick 100 00:05:35.519 tick 100 00:05:35.519 tick 250 00:05:35.519 tick 100 00:05:35.519 tick 100 00:05:35.519 tick 250 00:05:35.519 tick 100 00:05:35.519 tick 500 00:05:35.519 tick 100 00:05:35.519 tick 100 00:05:35.519 tick 250 00:05:35.519 tick 100 00:05:35.519 tick 100 00:05:35.519 test_end 00:05:35.519 00:05:35.519 real 0m1.182s 00:05:35.519 user 0m1.093s 00:05:35.519 sys 0m0.085s 00:05:35.519 20:58:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.519 20:58:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:35.519 ************************************ 00:05:35.519 END TEST event_reactor 00:05:35.519 ************************************ 00:05:35.519 20:58:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.519 20:58:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:35.519 20:58:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.519 20:58:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.519 ************************************ 00:05:35.519 START TEST event_reactor_perf 00:05:35.519 ************************************ 00:05:35.519 20:58:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:35.778 [2024-12-05 20:58:43.643355] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:05:35.778 [2024-12-05 20:58:43.643428] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120937 ] 00:05:35.778 [2024-12-05 20:58:43.723146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.778 [2024-12-05 20:58:43.765235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.714 test_start 00:05:36.714 test_end 00:05:36.714 Performance: 516203 events per second 00:05:36.714 00:05:36.714 real 0m1.180s 00:05:36.714 user 0m1.096s 00:05:36.714 sys 0m0.080s 00:05:36.714 20:58:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.714 20:58:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.714 ************************************ 00:05:36.714 END TEST event_reactor_perf 00:05:36.714 ************************************ 00:05:36.973 20:58:44 event -- event/event.sh@49 -- # uname -s 00:05:36.973 20:58:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.973 20:58:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.973 20:58:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.973 20:58:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.973 20:58:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.973 ************************************ 00:05:36.973 START TEST event_scheduler 00:05:36.973 ************************************ 00:05:36.973 20:58:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.973 * Looking for test storage... 00:05:36.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:36.973 20:58:44 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.973 20:58:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.973 20:58:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.973 20:58:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.973 20:58:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.974 20:58:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.974 20:58:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.974 --rc genhtml_branch_coverage=1 00:05:36.974 --rc genhtml_function_coverage=1 00:05:36.974 --rc genhtml_legend=1 00:05:36.974 --rc geninfo_all_blocks=1 00:05:36.974 --rc geninfo_unexecuted_blocks=1 00:05:36.974 00:05:36.974 ' 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.974 --rc genhtml_branch_coverage=1 00:05:36.974 --rc genhtml_function_coverage=1 00:05:36.974 --rc 
genhtml_legend=1 00:05:36.974 --rc geninfo_all_blocks=1 00:05:36.974 --rc geninfo_unexecuted_blocks=1 00:05:36.974 00:05:36.974 ' 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.974 --rc genhtml_branch_coverage=1 00:05:36.974 --rc genhtml_function_coverage=1 00:05:36.974 --rc genhtml_legend=1 00:05:36.974 --rc geninfo_all_blocks=1 00:05:36.974 --rc geninfo_unexecuted_blocks=1 00:05:36.974 00:05:36.974 ' 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.974 --rc genhtml_branch_coverage=1 00:05:36.974 --rc genhtml_function_coverage=1 00:05:36.974 --rc genhtml_legend=1 00:05:36.974 --rc geninfo_all_blocks=1 00:05:36.974 --rc geninfo_unexecuted_blocks=1 00:05:36.974 00:05:36.974 ' 00:05:36.974 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.974 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1121225 00:05:36.974 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.974 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.974 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1121225 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1121225 ']' 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.974 20:58:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.232 [2024-12-05 20:58:45.098734] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:05:37.232 [2024-12-05 20:58:45.098783] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121225 ] 00:05:37.232 [2024-12-05 20:58:45.174840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.232 [2024-12-05 20:58:45.216676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.232 [2024-12-05 20:58:45.216786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.232 [2024-12-05 20:58:45.216871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.232 [2024-12-05 20:58:45.216871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:37.232 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.232 [2024-12-05 20:58:45.257454] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:37.232 [2024-12-05 20:58:45.257473] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.232 [2024-12-05 20:58:45.257482] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.232 [2024-12-05 20:58:45.257487] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.232 [2024-12-05 20:58:45.257492] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.232 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.232 20:58:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.233 [2024-12-05 20:58:45.331933] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:37.233 20:58:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.233 20:58:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.233 20:58:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.233 20:58:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.233 20:58:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 ************************************ 00:05:37.491 START TEST scheduler_create_thread 00:05:37.491 ************************************ 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 2 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 3 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 4 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 5 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 6 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 7 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 8 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 9 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 10 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.491 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.492 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.492 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.492 20:58:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.492 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.492 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.058 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.058 20:58:45 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.058 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.058 20:58:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.433 20:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.433 20:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.433 20:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.433 20:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.433 20:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.370 20:58:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.370 00:05:40.370 real 0m3.102s 00:05:40.370 user 0m0.023s 00:05:40.370 sys 0m0.006s 00:05:40.370 20:58:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.370 20:58:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.370 ************************************ 00:05:40.370 END TEST scheduler_create_thread 00:05:40.370 ************************************ 00:05:40.630 20:58:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:40.630 20:58:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1121225 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1121225 ']' 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1121225 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121225 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121225' 00:05:40.630 killing process with pid 1121225 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1121225 00:05:40.630 20:58:48 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1121225 00:05:40.890 [2024-12-05 20:58:48.851049] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:41.149 00:05:41.149 real 0m4.161s 00:05:41.149 user 0m6.655s 00:05:41.149 sys 0m0.369s 00:05:41.149 20:58:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.149 20:58:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.149 ************************************ 00:05:41.149 END TEST event_scheduler 00:05:41.149 ************************************ 00:05:41.149 20:58:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.149 20:58:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.149 20:58:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.149 20:58:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.149 20:58:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.149 ************************************ 00:05:41.149 START TEST app_repeat 00:05:41.149 ************************************ 00:05:41.149 20:58:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1121969 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1121969' 00:05:41.149 Process app_repeat pid: 1121969 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.149 spdk_app_start Round 0 00:05:41.149 20:58:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1121969 /var/tmp/spdk-nbd.sock 00:05:41.149 20:58:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1121969 ']' 00:05:41.149 20:58:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.149 20:58:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.150 20:58:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.150 20:58:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.150 20:58:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 [2024-12-05 20:58:49.146446] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:41.150 [2024-12-05 20:58:49.146492] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121969 ] 00:05:41.150 [2024-12-05 20:58:49.219543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.408 [2024-12-05 20:58:49.263788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.408 [2024-12-05 20:58:49.263790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.408 20:58:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.408 20:58:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:41.408 20:58:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.667 Malloc0 00:05:41.667 20:58:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.667 Malloc1 00:05:41.667 20:58:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.667 
20:58:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.667 20:58:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.926 /dev/nbd0 00:05:41.926 20:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.926 20:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:41.926 1+0 records in 00:05:41.926 1+0 records out 00:05:41.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193791 s, 21.1 MB/s 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.926 20:58:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.926 20:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.926 20:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.926 20:58:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.185 /dev/nbd1 00:05:42.185 20:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.185 20:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.185 20:58:50 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.185 20:58:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.185 1+0 records in 00:05:42.186 1+0 records out 00:05:42.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226608 s, 18.1 MB/s 00:05:42.186 20:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.186 20:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.186 20:58:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.186 20:58:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.186 20:58:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.186 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.186 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.186 20:58:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.186 20:58:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.186 20:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.444 { 00:05:42.444 "nbd_device": "/dev/nbd0", 00:05:42.444 "bdev_name": "Malloc0" 00:05:42.444 }, 00:05:42.444 { 00:05:42.444 "nbd_device": "/dev/nbd1", 00:05:42.444 "bdev_name": "Malloc1" 00:05:42.444 } 00:05:42.444 ]' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.444 { 00:05:42.444 "nbd_device": "/dev/nbd0", 00:05:42.444 "bdev_name": "Malloc0" 00:05:42.444 
}, 00:05:42.444 { 00:05:42.444 "nbd_device": "/dev/nbd1", 00:05:42.444 "bdev_name": "Malloc1" 00:05:42.444 } 00:05:42.444 ]' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.444 /dev/nbd1' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.444 /dev/nbd1' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.444 256+0 records in 00:05:42.444 256+0 records out 00:05:42.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106454 s, 98.5 MB/s 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.444 256+0 records in 00:05:42.444 256+0 records out 00:05:42.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138187 s, 75.9 MB/s 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.444 256+0 records in 00:05:42.444 256+0 records out 00:05:42.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148698 s, 70.5 MB/s 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.444 20:58:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.703 20:58:50 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.703 20:58:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.961 20:58:50 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.961 20:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.219 20:58:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.219 20:58:51 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.478 20:58:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.737 [2024-12-05 20:58:51.618311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.737 [2024-12-05 20:58:51.655298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.737 [2024-12-05 20:58:51.655299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.737 [2024-12-05 20:58:51.696066] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.737 [2024-12-05 20:58:51.696104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.021 20:58:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.021 20:58:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.021 spdk_app_start Round 1 00:05:47.021 20:58:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1121969 /var/tmp/spdk-nbd.sock 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1121969 ']' 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.021 20:58:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.021 20:58:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.021 Malloc0 00:05:47.021 20:58:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.021 Malloc1 00:05:47.021 20:58:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.021 20:58:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.279 /dev/nbd0 00:05:47.279 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.279 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.279 1+0 records in 00:05:47.279 1+0 records out 00:05:47.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201603 s, 20.3 MB/s 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.279 20:58:55 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.279 20:58:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.279 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.279 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.279 20:58:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.550 /dev/nbd1 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.550 1+0 records in 00:05:47.550 1+0 records out 00:05:47.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243864 s, 16.8 MB/s 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.550 20:58:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.550 20:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.860 { 00:05:47.860 "nbd_device": "/dev/nbd0", 00:05:47.860 "bdev_name": "Malloc0" 00:05:47.860 }, 00:05:47.860 { 00:05:47.860 "nbd_device": "/dev/nbd1", 00:05:47.860 "bdev_name": "Malloc1" 00:05:47.860 } 00:05:47.860 ]' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.860 { 00:05:47.860 "nbd_device": "/dev/nbd0", 00:05:47.860 "bdev_name": "Malloc0" 00:05:47.860 }, 00:05:47.860 { 00:05:47.860 "nbd_device": "/dev/nbd1", 00:05:47.860 "bdev_name": "Malloc1" 00:05:47.860 } 00:05:47.860 ]' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.860 /dev/nbd1' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.860 /dev/nbd1' 00:05:47.860 
20:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.860 256+0 records in 00:05:47.860 256+0 records out 00:05:47.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106484 s, 98.5 MB/s 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.860 256+0 records in 00:05:47.860 256+0 records out 00:05:47.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01365 s, 76.8 MB/s 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.860 256+0 records in 00:05:47.860 256+0 records out 00:05:47.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155493 s, 67.4 MB/s 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.860 20:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.402 20:58:56 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.402 20:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.661 20:58:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.661 20:58:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.920 20:58:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.920 [2024-12-05 20:58:56.927734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.920 [2024-12-05 20:58:56.964413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.920 [2024-12-05 20:58:56.964416] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.920 [2024-12-05 20:58:57.005992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.920 [2024-12-05 20:58:57.006032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.206 20:58:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.206 20:58:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.206 spdk_app_start Round 2 00:05:52.206 20:58:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1121969 /var/tmp/spdk-nbd.sock 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1121969 ']' 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
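The `nbd_dd_data_verify` calls traced in each round follow a fixed write/verify cycle: fill a temp file with 1 MiB of random data, `dd` it onto each NBD device, then `cmp` the first 1 MiB of each device back against the temp file. A hedged sketch of that cycle, run against a plain file standing in for `/dev/nbd0` since no NBD devices are assumed here (the real script also passes `oflag=direct`, which ordinary files on some filesystems reject, so it is omitted):

```shell
# Sketch of nbd_dd_data_verify's write-then-verify flow using plain files.
tmp_file=$(mktemp)   # plays the role of .../test/event/nbdrandtest
nbd0=$(mktemp)       # hypothetical stand-in for /dev/nbd0

# Write phase: 256 blocks of 4096 bytes of random data, copied onto the "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$nbd0" bs=4096 count=256 2>/dev/null

# Verify phase: byte-compare the first 1 MiB, as `cmp -b -n 1M` does in the log.
result=$(cmp -b -n 1M "$tmp_file" "$nbd0" && echo verified)
echo "$result"

rm -f "$tmp_file" "$nbd0"
```

In the real test the `cmp` runs once per device in `nbd_list`, and the temp file is removed with `rm` only after every device has been verified.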
00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.206 20:58:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.206 20:58:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.206 Malloc0 00:05:52.206 20:59:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.465 Malloc1 00:05:52.465 20:59:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.465 20:59:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.724 /dev/nbd0 00:05:52.724 20:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.724 20:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.724 1+0 records in 00:05:52.724 1+0 records out 00:05:52.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227437 s, 18.0 MB/s 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.724 20:59:00 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.724 20:59:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.724 20:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.724 20:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.724 20:59:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.983 /dev/nbd1 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.983 1+0 records in 00:05:52.983 1+0 records out 00:05:52.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226626 s, 18.1 MB/s 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.983 20:59:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.983 20:59:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.242 { 00:05:53.242 "nbd_device": "/dev/nbd0", 00:05:53.242 "bdev_name": "Malloc0" 00:05:53.242 }, 00:05:53.242 { 00:05:53.242 "nbd_device": "/dev/nbd1", 00:05:53.242 "bdev_name": "Malloc1" 00:05:53.242 } 00:05:53.242 ]' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.242 { 00:05:53.242 "nbd_device": "/dev/nbd0", 00:05:53.242 "bdev_name": "Malloc0" 00:05:53.242 }, 00:05:53.242 { 00:05:53.242 "nbd_device": "/dev/nbd1", 00:05:53.242 "bdev_name": "Malloc1" 00:05:53.242 } 00:05:53.242 ]' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.242 /dev/nbd1' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.242 /dev/nbd1' 00:05:53.242 
20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.242 256+0 records in 00:05:53.242 256+0 records out 00:05:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106751 s, 98.2 MB/s 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.242 256+0 records in 00:05:53.242 256+0 records out 00:05:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139621 s, 75.1 MB/s 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.242 256+0 records in 00:05:53.242 256+0 records out 00:05:53.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151198 s, 69.4 MB/s 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:53.242 20:59:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.243 20:59:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.243 20:59:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.243 20:59:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.501 20:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.502 20:59:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.760 20:59:01 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.760 20:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.019 20:59:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.019 20:59:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.019 20:59:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.278 [2024-12-05 20:59:02.255884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.278 [2024-12-05 20:59:02.292522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.278 [2024-12-05 20:59:02.292523] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.278 [2024-12-05 20:59:02.333338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.278 [2024-12-05 20:59:02.333381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.567 20:59:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1121969 /var/tmp/spdk-nbd.sock 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1121969 ']' 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:57.567 20:59:05 event.app_repeat -- event/event.sh@39 -- # killprocess 1121969 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1121969 ']' 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1121969 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121969 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.567 20:59:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121969' 00:05:57.568 killing process with pid 1121969 00:05:57.568 20:59:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1121969 00:05:57.568 20:59:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1121969 00:05:57.568 spdk_app_start is called in Round 0. 00:05:57.568 Shutdown signal received, stop current app iteration 00:05:57.568 Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 reinitialization... 00:05:57.568 spdk_app_start is called in Round 1. 00:05:57.568 Shutdown signal received, stop current app iteration 00:05:57.568 Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 reinitialization... 00:05:57.568 spdk_app_start is called in Round 2. 
00:05:57.568 Shutdown signal received, stop current app iteration 00:05:57.568 Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 reinitialization... 00:05:57.568 spdk_app_start is called in Round 3. 00:05:57.568 Shutdown signal received, stop current app iteration 00:05:57.568 20:59:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.568 20:59:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:57.568 00:05:57.568 real 0m16.391s 00:05:57.568 user 0m36.030s 00:05:57.568 sys 0m2.528s 00:05:57.568 20:59:05 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.568 20:59:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.568 ************************************ 00:05:57.568 END TEST app_repeat 00:05:57.568 ************************************ 00:05:57.568 20:59:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.568 20:59:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.568 20:59:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.568 20:59:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.568 20:59:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.568 ************************************ 00:05:57.568 START TEST cpu_locks 00:05:57.568 ************************************ 00:05:57.568 20:59:05 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.568 * Looking for test storage... 
00:05:57.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:57.568 20:59:05 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.568 20:59:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.568 20:59:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.826 20:59:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.826 --rc genhtml_branch_coverage=1 00:05:57.826 --rc genhtml_function_coverage=1 00:05:57.826 --rc genhtml_legend=1 00:05:57.826 --rc geninfo_all_blocks=1 00:05:57.826 --rc geninfo_unexecuted_blocks=1 00:05:57.826 00:05:57.826 ' 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.826 --rc genhtml_branch_coverage=1 00:05:57.826 --rc genhtml_function_coverage=1 00:05:57.826 --rc genhtml_legend=1 00:05:57.826 --rc geninfo_all_blocks=1 00:05:57.826 --rc geninfo_unexecuted_blocks=1 
00:05:57.826 00:05:57.826 ' 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.826 --rc genhtml_branch_coverage=1 00:05:57.826 --rc genhtml_function_coverage=1 00:05:57.826 --rc genhtml_legend=1 00:05:57.826 --rc geninfo_all_blocks=1 00:05:57.826 --rc geninfo_unexecuted_blocks=1 00:05:57.826 00:05:57.826 ' 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.826 --rc genhtml_branch_coverage=1 00:05:57.826 --rc genhtml_function_coverage=1 00:05:57.826 --rc genhtml_legend=1 00:05:57.826 --rc geninfo_all_blocks=1 00:05:57.826 --rc geninfo_unexecuted_blocks=1 00:05:57.826 00:05:57.826 ' 00:05:57.826 20:59:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.826 20:59:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.826 20:59:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.826 20:59:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.826 20:59:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.826 ************************************ 00:05:57.826 START TEST default_locks 00:05:57.826 ************************************ 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1124968 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1124968 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1124968 ']' 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.826 20:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.826 [2024-12-05 20:59:05.830193] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:57.826 [2024-12-05 20:59:05.830232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124968 ] 00:05:57.826 [2024-12-05 20:59:05.904253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.085 [2024-12-05 20:59:05.947146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.085 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.085 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:58.085 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1124968 00:05:58.085 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1124968 00:05:58.085 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.343 lslocks: write error 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1124968 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1124968 ']' 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1124968 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1124968 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1124968' 00:05:58.343 killing process with pid 1124968 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1124968 00:05:58.343 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1124968 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1124968 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1124968 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1124968 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1124968 ']' 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1124968) - No such process 00:05:58.910 ERROR: process (pid: 1124968) is no longer running 00:05:58.910 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.911 00:05:58.911 real 0m0.976s 00:05:58.911 user 0m0.925s 00:05:58.911 sys 0m0.454s 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.911 20:59:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.911 ************************************ 00:05:58.911 END TEST default_locks 00:05:58.911 ************************************ 00:05:58.911 20:59:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:58.911 20:59:06 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.911 20:59:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.911 20:59:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.911 ************************************ 00:05:58.911 START TEST default_locks_via_rpc 00:05:58.911 ************************************ 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1125226 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1125226 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1125226 ']' 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.911 20:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.911 [2024-12-05 20:59:06.877887] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:05:58.911 [2024-12-05 20:59:06.877926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125226 ] 00:05:58.911 [2024-12-05 20:59:06.950978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.911 [2024-12-05 20:59:06.992754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.170 20:59:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1125226 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1125226 00:05:59.170 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.483 20:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1125226 00:05:59.483 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1125226 ']' 00:05:59.483 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1125226 00:05:59.483 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.483 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.483 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125226 00:05:59.742 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.742 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.742 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125226' 00:05:59.742 killing process with pid 1125226 00:05:59.742 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1125226 00:05:59.742 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1125226 00:06:00.000 00:06:00.000 real 0m1.070s 00:06:00.000 user 0m1.023s 00:06:00.000 sys 0m0.492s 00:06:00.000 20:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.000 20:59:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.000 ************************************ 00:06:00.000 END TEST default_locks_via_rpc 00:06:00.000 ************************************ 00:06:00.000 20:59:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.000 20:59:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.000 20:59:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.000 20:59:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.000 ************************************ 00:06:00.000 START TEST non_locking_app_on_locked_coremask 00:06:00.000 ************************************ 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1125480 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1125480 /var/tmp/spdk.sock 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1125480 ']' 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:00.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.000 20:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.000 [2024-12-05 20:59:08.019889] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:00.000 [2024-12-05 20:59:08.019932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125480 ] 00:06:00.000 [2024-12-05 20:59:08.092105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.259 [2024-12-05 20:59:08.135322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.259 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1125489 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1125489 /var/tmp/spdk2.sock 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1125489 ']' 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.260 20:59:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.518 [2024-12-05 20:59:08.395301] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:00.518 [2024-12-05 20:59:08.395348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125489 ] 00:06:00.519 [2024-12-05 20:59:08.477180] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.519 [2024-12-05 20:59:08.477202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.519 [2024-12-05 20:59:08.560802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.453 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.453 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.453 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1125480 00:06:01.453 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1125480 00:06:01.453 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.712 lslocks: write error 00:06:01.712 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1125480 00:06:01.712 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1125480 ']' 00:06:01.712 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1125480 00:06:01.712 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.712 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.712 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125480 00:06:01.971 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.971 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.971 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1125480' 00:06:01.971 killing process with pid 1125480 00:06:01.971 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1125480 00:06:01.971 20:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1125480 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1125489 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1125489 ']' 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1125489 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125489 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125489' 00:06:02.537 killing process with pid 1125489 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1125489 00:06:02.537 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1125489 00:06:02.794 00:06:02.794 real 0m2.845s 00:06:02.794 user 0m2.988s 00:06:02.794 sys 0m0.968s 00:06:02.794 20:59:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.794 20:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.794 ************************************ 00:06:02.794 END TEST non_locking_app_on_locked_coremask 00:06:02.794 ************************************ 00:06:02.794 20:59:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:02.794 20:59:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.794 20:59:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.794 20:59:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.794 ************************************ 00:06:02.794 START TEST locking_app_on_unlocked_coremask 00:06:02.794 ************************************ 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1125980 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1125980 /var/tmp/spdk.sock 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1125980 ']' 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.794 20:59:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.794 20:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.052 [2024-12-05 20:59:10.925532] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:03.052 [2024-12-05 20:59:10.925566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125980 ] 00:06:03.052 [2024-12-05 20:59:10.999219] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.052 [2024-12-05 20:59:10.999241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.052 [2024-12-05 20:59:11.038654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1125988 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1125988 /var/tmp/spdk2.sock 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1125988 ']' 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.309 20:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.309 [2024-12-05 20:59:11.312999] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:06:03.309 [2024-12-05 20:59:11.313046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125988 ] 00:06:03.309 [2024-12-05 20:59:11.397218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.567 [2024-12-05 20:59:11.478046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.133 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.133 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.133 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1125988 00:06:04.133 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1125988 00:06:04.133 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.700 lslocks: write error 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1125980 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1125980 ']' 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1125980 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125980 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125980' 00:06:04.700 killing process with pid 1125980 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1125980 00:06:04.700 20:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1125980 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1125988 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1125988 ']' 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1125988 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125988 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125988' 00:06:05.267 killing process with pid 1125988 00:06:05.267 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1125988 00:06:05.267 20:59:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1125988 00:06:05.526 00:06:05.526 real 0m2.742s 00:06:05.526 user 0m2.862s 00:06:05.526 sys 0m0.927s 00:06:05.526 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.526 20:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.526 ************************************ 00:06:05.526 END TEST locking_app_on_unlocked_coremask 00:06:05.526 ************************************ 00:06:05.786 20:59:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.786 20:59:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.786 20:59:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.786 20:59:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 ************************************ 00:06:05.786 START TEST locking_app_on_locked_coremask 00:06:05.786 ************************************ 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1126497 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1126497 /var/tmp/spdk.sock 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1126497 ']' 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.786 20:59:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 [2024-12-05 20:59:13.744814] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:05.786 [2024-12-05 20:59:13.744858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126497 ] 00:06:05.786 [2024-12-05 20:59:13.816594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.786 [2024-12-05 20:59:13.854671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1126503 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1126503 /var/tmp/spdk2.sock 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1126503 /var/tmp/spdk2.sock 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1126503 /var/tmp/spdk2.sock 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1126503 ']' 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.045 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 [2024-12-05 20:59:14.130400] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:06.045 [2024-12-05 20:59:14.130447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126503 ] 00:06:06.303 [2024-12-05 20:59:14.221749] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1126497 has claimed it. 00:06:06.303 [2024-12-05 20:59:14.221787] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1126503) - No such process 00:06:06.869 ERROR: process (pid: 1126503) is no longer running 00:06:06.869 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.869 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.870 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.870 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.870 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.870 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.870 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1126497 00:06:06.870 20:59:14 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1126497 00:06:06.870 20:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.433 lslocks: write error 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1126497 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1126497 ']' 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1126497 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126497 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126497' 00:06:07.433 killing process with pid 1126497 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1126497 00:06:07.433 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1126497 00:06:07.691 00:06:07.691 real 0m1.941s 00:06:07.691 user 0m2.062s 00:06:07.691 sys 0m0.670s 00:06:07.691 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.691 20:59:15 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.691 ************************************ 00:06:07.691 END TEST locking_app_on_locked_coremask 00:06:07.691 ************************************ 00:06:07.691 20:59:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.691 20:59:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.691 20:59:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.691 20:59:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.691 ************************************ 00:06:07.691 START TEST locking_overlapped_coremask 00:06:07.691 ************************************ 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1126775 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1126775 /var/tmp/spdk.sock 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1126775 ']' 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.691 20:59:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.691 [2024-12-05 20:59:15.743538] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:07.691 [2024-12-05 20:59:15.743576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126775 ] 00:06:07.949 [2024-12-05 20:59:15.818237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.949 [2024-12-05 20:59:15.862905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.949 [2024-12-05 20:59:15.863009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.949 [2024-12-05 20:59:15.863010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1126988 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1126988 /var/tmp/spdk2.sock 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1126988 /var/tmp/spdk2.sock 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1126988 /var/tmp/spdk2.sock 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1126988 ']' 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.208 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.208 [2024-12-05 20:59:16.132482] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:06:08.208 [2024-12-05 20:59:16.132530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126988 ] 00:06:08.208 [2024-12-05 20:59:16.223110] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1126775 has claimed it. 00:06:08.208 [2024-12-05 20:59:16.223146] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1126988) - No such process 00:06:08.776 ERROR: process (pid: 1126988) is no longer running 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1126775 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1126775 ']' 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1126775 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126775 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126775' 00:06:08.776 killing process with pid 1126775 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1126775 00:06:08.776 20:59:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1126775 00:06:09.035 00:06:09.035 real 0m1.423s 00:06:09.035 user 0m3.940s 00:06:09.035 sys 0m0.381s 00:06:09.035 20:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.035 20:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.035 
************************************ 00:06:09.035 END TEST locking_overlapped_coremask 00:06:09.035 ************************************ 00:06:09.293 20:59:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.293 20:59:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.293 20:59:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.293 20:59:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.293 ************************************ 00:06:09.293 START TEST locking_overlapped_coremask_via_rpc 00:06:09.293 ************************************ 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1127089 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1127089 /var/tmp/spdk.sock 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1127089 ']' 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:09.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.293 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.293 [2024-12-05 20:59:17.249471] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:09.293 [2024-12-05 20:59:17.249517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127089 ] 00:06:09.293 [2024-12-05 20:59:17.325141] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.293 [2024-12-05 20:59:17.325167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.293 [2024-12-05 20:59:17.369441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.293 [2024-12-05 20:59:17.369550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.293 [2024-12-05 20:59:17.369551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1127258 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1127258 /var/tmp/spdk2.sock 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1127258 ']' 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.552 20:59:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.552 [2024-12-05 20:59:17.630516] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:09.552 [2024-12-05 20:59:17.630566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127258 ] 00:06:09.810 [2024-12-05 20:59:17.721379] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.810 [2024-12-05 20:59:17.721402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.810 [2024-12-05 20:59:17.804289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.810 [2024-12-05 20:59:17.807413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.810 [2024-12-05 20:59:17.807414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.374 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.375 20:59:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.375 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.632 [2024-12-05 20:59:18.484437] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1127089 has claimed it. 00:06:10.632 request: 00:06:10.632 { 00:06:10.632 "method": "framework_enable_cpumask_locks", 00:06:10.632 "req_id": 1 00:06:10.632 } 00:06:10.632 Got JSON-RPC error response 00:06:10.632 response: 00:06:10.632 { 00:06:10.632 "code": -32603, 00:06:10.632 "message": "Failed to claim CPU core: 2" 00:06:10.632 } 00:06:10.632 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:10.632 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:10.632 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.632 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.632 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.632 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1127089 /var/tmp/spdk.sock 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1127089 ']' 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1127258 /var/tmp/spdk2.sock 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1127258 ']' 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.633 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.891 00:06:10.891 real 0m1.732s 00:06:10.891 user 0m0.847s 00:06:10.891 sys 0m0.128s 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.891 20:59:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.891 ************************************ 00:06:10.891 END TEST locking_overlapped_coremask_via_rpc 00:06:10.891 ************************************ 00:06:10.891 20:59:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.891 20:59:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1127089 ]] 00:06:10.891 20:59:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1127089 00:06:10.891 20:59:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1127089 ']' 00:06:10.891 20:59:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1127089 00:06:10.891 20:59:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.891 20:59:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.891 20:59:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127089 00:06:11.149 20:59:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.149 20:59:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.149 20:59:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127089' 00:06:11.149 killing process with pid 1127089 00:06:11.149 20:59:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1127089 00:06:11.149 20:59:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1127089 00:06:11.408 20:59:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1127258 ]] 00:06:11.408 20:59:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1127258 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1127258 ']' 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1127258 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127258 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1127258' 00:06:11.408 killing process with pid 1127258 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1127258 00:06:11.408 20:59:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1127258 00:06:11.668 20:59:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.668 20:59:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.668 20:59:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1127089 ]] 00:06:11.668 20:59:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1127089 00:06:11.668 20:59:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1127089 ']' 00:06:11.668 20:59:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1127089 00:06:11.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1127089) - No such process 00:06:11.668 20:59:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1127089 is not found' 00:06:11.668 Process with pid 1127089 is not found 00:06:11.669 20:59:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1127258 ]] 00:06:11.669 20:59:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1127258 00:06:11.669 20:59:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1127258 ']' 00:06:11.669 20:59:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1127258 00:06:11.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1127258) - No such process 00:06:11.669 20:59:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1127258 is not found' 00:06:11.669 Process with pid 1127258 is not found 00:06:11.669 20:59:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.669 00:06:11.669 real 0m14.125s 00:06:11.669 user 0m24.529s 00:06:11.669 sys 0m5.004s 00:06:11.669 20:59:19 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.669 
20:59:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.669 ************************************ 00:06:11.669 END TEST cpu_locks 00:06:11.669 ************************************ 00:06:11.669 00:06:11.669 real 0m38.828s 00:06:11.669 user 1m13.777s 00:06:11.669 sys 0m8.518s 00:06:11.669 20:59:19 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.669 20:59:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.669 ************************************ 00:06:11.669 END TEST event 00:06:11.669 ************************************ 00:06:11.669 20:59:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.669 20:59:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.669 20:59:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.669 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:11.929 ************************************ 00:06:11.929 START TEST thread 00:06:11.929 ************************************ 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.929 * Looking for test storage... 
00:06:11.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.929 20:59:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.929 20:59:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.929 20:59:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.929 20:59:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.929 20:59:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.929 20:59:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.929 20:59:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.929 20:59:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.929 20:59:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.929 20:59:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.929 20:59:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.929 20:59:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:11.929 20:59:19 thread -- scripts/common.sh@345 -- # : 1 00:06:11.929 20:59:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.929 20:59:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.929 20:59:19 thread -- scripts/common.sh@365 -- # decimal 1 00:06:11.929 20:59:19 thread -- scripts/common.sh@353 -- # local d=1 00:06:11.929 20:59:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.929 20:59:19 thread -- scripts/common.sh@355 -- # echo 1 00:06:11.929 20:59:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.929 20:59:19 thread -- scripts/common.sh@366 -- # decimal 2 00:06:11.929 20:59:19 thread -- scripts/common.sh@353 -- # local d=2 00:06:11.929 20:59:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.929 20:59:19 thread -- scripts/common.sh@355 -- # echo 2 00:06:11.929 20:59:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.929 20:59:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.929 20:59:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.929 20:59:19 thread -- scripts/common.sh@368 -- # return 0 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.929 --rc genhtml_branch_coverage=1 00:06:11.929 --rc genhtml_function_coverage=1 00:06:11.929 --rc genhtml_legend=1 00:06:11.929 --rc geninfo_all_blocks=1 00:06:11.929 --rc geninfo_unexecuted_blocks=1 00:06:11.929 00:06:11.929 ' 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.929 --rc genhtml_branch_coverage=1 00:06:11.929 --rc genhtml_function_coverage=1 00:06:11.929 --rc genhtml_legend=1 00:06:11.929 --rc geninfo_all_blocks=1 00:06:11.929 --rc geninfo_unexecuted_blocks=1 00:06:11.929 00:06:11.929 ' 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.929 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.929 --rc genhtml_branch_coverage=1 00:06:11.929 --rc genhtml_function_coverage=1 00:06:11.929 --rc genhtml_legend=1 00:06:11.929 --rc geninfo_all_blocks=1 00:06:11.929 --rc geninfo_unexecuted_blocks=1 00:06:11.929 00:06:11.929 ' 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.929 --rc genhtml_branch_coverage=1 00:06:11.929 --rc genhtml_function_coverage=1 00:06:11.929 --rc genhtml_legend=1 00:06:11.929 --rc geninfo_all_blocks=1 00:06:11.929 --rc geninfo_unexecuted_blocks=1 00:06:11.929 00:06:11.929 ' 00:06:11.929 20:59:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.929 20:59:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.929 ************************************ 00:06:11.929 START TEST thread_poller_perf 00:06:11.929 ************************************ 00:06:11.929 20:59:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.188 [2024-12-05 20:59:20.038110] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:06:12.188 [2024-12-05 20:59:20.038188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127695 ] 00:06:12.188 [2024-12-05 20:59:20.120730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.188 [2024-12-05 20:59:20.161404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.188 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:13.123 [2024-12-05T19:59:21.231Z] ====================================== 00:06:13.123 [2024-12-05T19:59:21.231Z] busy:2106854614 (cyc) 00:06:13.123 [2024-12-05T19:59:21.231Z] total_run_count: 421000 00:06:13.123 [2024-12-05T19:59:21.231Z] tsc_hz: 2100000000 (cyc) 00:06:13.123 [2024-12-05T19:59:21.231Z] ====================================== 00:06:13.123 [2024-12-05T19:59:21.231Z] poller_cost: 5004 (cyc), 2382 (nsec) 00:06:13.123 00:06:13.123 real 0m1.190s 00:06:13.123 user 0m1.104s 00:06:13.123 sys 0m0.081s 00:06:13.123 20:59:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.123 20:59:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.123 ************************************ 00:06:13.123 END TEST thread_poller_perf 00:06:13.123 ************************************ 00:06:13.381 20:59:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.381 20:59:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:13.381 20:59:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.381 20:59:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 ************************************ 00:06:13.381 START TEST thread_poller_perf 00:06:13.381 
************************************ 00:06:13.381 20:59:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.382 [2024-12-05 20:59:21.301242] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:13.382 [2024-12-05 20:59:21.301313] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127868 ] 00:06:13.382 [2024-12-05 20:59:21.380595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.382 [2024-12-05 20:59:21.421317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.382 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:14.758 [2024-12-05T19:59:22.866Z] ====================================== 00:06:14.758 [2024-12-05T19:59:22.866Z] busy:2101752478 (cyc) 00:06:14.758 [2024-12-05T19:59:22.866Z] total_run_count: 5179000 00:06:14.758 [2024-12-05T19:59:22.866Z] tsc_hz: 2100000000 (cyc) 00:06:14.758 [2024-12-05T19:59:22.866Z] ====================================== 00:06:14.758 [2024-12-05T19:59:22.866Z] poller_cost: 405 (cyc), 192 (nsec) 00:06:14.758 00:06:14.758 real 0m1.184s 00:06:14.758 user 0m1.108s 00:06:14.758 sys 0m0.072s 00:06:14.758 20:59:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.758 20:59:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.758 ************************************ 00:06:14.758 END TEST thread_poller_perf 00:06:14.758 ************************************ 00:06:14.758 20:59:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:14.758 00:06:14.758 real 0m2.694s 00:06:14.758 user 0m2.357s 00:06:14.758 sys 0m0.351s 00:06:14.758 20:59:22 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.758 20:59:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.758 ************************************ 00:06:14.758 END TEST thread 00:06:14.758 ************************************ 00:06:14.758 20:59:22 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:14.758 20:59:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.758 20:59:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.758 20:59:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.758 20:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.758 ************************************ 00:06:14.758 START TEST app_cmdline 00:06:14.758 ************************************ 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.758 * Looking for test storage... 00:06:14.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.758 20:59:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.758 --rc genhtml_branch_coverage=1 
00:06:14.758 --rc genhtml_function_coverage=1 00:06:14.758 --rc genhtml_legend=1 00:06:14.758 --rc geninfo_all_blocks=1 00:06:14.758 --rc geninfo_unexecuted_blocks=1 00:06:14.758 00:06:14.758 ' 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.758 --rc genhtml_branch_coverage=1 00:06:14.758 --rc genhtml_function_coverage=1 00:06:14.758 --rc genhtml_legend=1 00:06:14.758 --rc geninfo_all_blocks=1 00:06:14.758 --rc geninfo_unexecuted_blocks=1 00:06:14.758 00:06:14.758 ' 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.758 --rc genhtml_branch_coverage=1 00:06:14.758 --rc genhtml_function_coverage=1 00:06:14.758 --rc genhtml_legend=1 00:06:14.758 --rc geninfo_all_blocks=1 00:06:14.758 --rc geninfo_unexecuted_blocks=1 00:06:14.758 00:06:14.758 ' 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.758 --rc genhtml_branch_coverage=1 00:06:14.758 --rc genhtml_function_coverage=1 00:06:14.758 --rc genhtml_legend=1 00:06:14.758 --rc geninfo_all_blocks=1 00:06:14.758 --rc geninfo_unexecuted_blocks=1 00:06:14.758 00:06:14.758 ' 00:06:14.758 20:59:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:14.758 20:59:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1128219 00:06:14.758 20:59:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1128219 00:06:14.758 20:59:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1128219 ']' 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.758 20:59:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.758 [2024-12-05 20:59:22.799397] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:14.758 [2024-12-05 20:59:22.799449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128219 ] 00:06:15.017 [2024-12-05 20:59:22.874299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.017 [2024-12-05 20:59:22.914578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:15.276 { 00:06:15.276 "version": "SPDK v25.01-pre git sha1 2b8672176", 00:06:15.276 "fields": { 00:06:15.276 "major": 25, 00:06:15.276 "minor": 1, 00:06:15.276 "patch": 0, 00:06:15.276 "suffix": "-pre", 00:06:15.276 "commit": "2b8672176" 00:06:15.276 } 00:06:15.276 } 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:15.276 20:59:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:15.276 20:59:23 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.535 request: 00:06:15.535 { 00:06:15.535 "method": "env_dpdk_get_mem_stats", 00:06:15.535 "req_id": 1 00:06:15.535 } 00:06:15.535 Got JSON-RPC error response 00:06:15.535 response: 00:06:15.535 { 00:06:15.535 "code": -32601, 00:06:15.535 "message": "Method not found" 00:06:15.535 } 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.535 20:59:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1128219 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1128219 ']' 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1128219 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1128219 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1128219' 00:06:15.535 killing process with pid 1128219 00:06:15.535 
20:59:23 app_cmdline -- common/autotest_common.sh@973 -- # kill 1128219 00:06:15.535 20:59:23 app_cmdline -- common/autotest_common.sh@978 -- # wait 1128219 00:06:16.103 00:06:16.103 real 0m1.341s 00:06:16.103 user 0m1.552s 00:06:16.103 sys 0m0.460s 00:06:16.104 20:59:23 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.104 20:59:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.104 ************************************ 00:06:16.104 END TEST app_cmdline 00:06:16.104 ************************************ 00:06:16.104 20:59:23 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.104 20:59:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.104 20:59:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.104 20:59:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.104 ************************************ 00:06:16.104 START TEST version 00:06:16.104 ************************************ 00:06:16.104 20:59:23 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.104 * Looking for test storage... 
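The `version` test starting here builds the version string by grepping the `SPDK_VERSION_*` defines out of `include/spdk/version.h`. A self-contained sketch of that extraction (header contents inlined from the values in the trace; the real script uses `cut -f2` on the tab-delimited header, `awk` is used here to tolerate any whitespace):

```shell
# Stand-in for include/spdk/version.h, with the values seen in the trace.
cat > /tmp/spdk_version.h <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

# Pull one component out of the header and strip surrounding quotes.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" /tmp/spdk_version.h |
        awk '{print $NF}' | tr -d '"'
}

version="$(get_header_version MAJOR).$(get_header_version MINOR)"
patch=$(get_header_version PATCH)
[ "$patch" -ne 0 ] && version+=".$patch"
# Per the trace, a "-pre" suffix is mapped to the PEP 440 style "rc0" that is
# later compared against python's spdk.__version__.
[ "$(get_header_version SUFFIX)" = "-pre" ] && version+=rc0
echo "$version"
```

With the values above this prints `25.1rc0`, matching the `py_version=25.1rc0` check in the trace.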
00:06:16.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.104 20:59:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.104 20:59:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.104 20:59:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.104 20:59:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.104 20:59:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.104 20:59:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.104 20:59:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.104 20:59:24 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.104 20:59:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.104 20:59:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.104 20:59:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.104 20:59:24 version -- scripts/common.sh@344 -- # case "$op" in 00:06:16.104 20:59:24 version -- scripts/common.sh@345 -- # : 1 00:06:16.104 20:59:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.104 20:59:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.104 20:59:24 version -- scripts/common.sh@365 -- # decimal 1 00:06:16.104 20:59:24 version -- scripts/common.sh@353 -- # local d=1 00:06:16.104 20:59:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.104 20:59:24 version -- scripts/common.sh@355 -- # echo 1 00:06:16.104 20:59:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.104 20:59:24 version -- scripts/common.sh@366 -- # decimal 2 00:06:16.104 20:59:24 version -- scripts/common.sh@353 -- # local d=2 00:06:16.104 20:59:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.104 20:59:24 version -- scripts/common.sh@355 -- # echo 2 00:06:16.104 20:59:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.104 20:59:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.104 20:59:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.104 20:59:24 version -- scripts/common.sh@368 -- # return 0 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.104 --rc genhtml_branch_coverage=1 00:06:16.104 --rc genhtml_function_coverage=1 00:06:16.104 --rc genhtml_legend=1 00:06:16.104 --rc geninfo_all_blocks=1 00:06:16.104 --rc geninfo_unexecuted_blocks=1 00:06:16.104 00:06:16.104 ' 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.104 --rc genhtml_branch_coverage=1 00:06:16.104 --rc genhtml_function_coverage=1 00:06:16.104 --rc genhtml_legend=1 00:06:16.104 --rc geninfo_all_blocks=1 00:06:16.104 --rc geninfo_unexecuted_blocks=1 00:06:16.104 00:06:16.104 ' 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.104 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.104 --rc genhtml_branch_coverage=1 00:06:16.104 --rc genhtml_function_coverage=1 00:06:16.104 --rc genhtml_legend=1 00:06:16.104 --rc geninfo_all_blocks=1 00:06:16.104 --rc geninfo_unexecuted_blocks=1 00:06:16.104 00:06:16.104 ' 00:06:16.104 20:59:24 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.104 --rc genhtml_branch_coverage=1 00:06:16.104 --rc genhtml_function_coverage=1 00:06:16.104 --rc genhtml_legend=1 00:06:16.104 --rc geninfo_all_blocks=1 00:06:16.104 --rc geninfo_unexecuted_blocks=1 00:06:16.104 00:06:16.104 ' 00:06:16.104 20:59:24 version -- app/version.sh@17 -- # get_header_version major 00:06:16.104 20:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # cut -f2 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.104 20:59:24 version -- app/version.sh@17 -- # major=25 00:06:16.104 20:59:24 version -- app/version.sh@18 -- # get_header_version minor 00:06:16.104 20:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # cut -f2 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.104 20:59:24 version -- app/version.sh@18 -- # minor=1 00:06:16.104 20:59:24 version -- app/version.sh@19 -- # get_header_version patch 00:06:16.104 20:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # cut -f2 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.104 
20:59:24 version -- app/version.sh@19 -- # patch=0 00:06:16.104 20:59:24 version -- app/version.sh@20 -- # get_header_version suffix 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # cut -f2 00:06:16.104 20:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.104 20:59:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.104 20:59:24 version -- app/version.sh@20 -- # suffix=-pre 00:06:16.104 20:59:24 version -- app/version.sh@22 -- # version=25.1 00:06:16.104 20:59:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:16.104 20:59:24 version -- app/version.sh@28 -- # version=25.1rc0 00:06:16.104 20:59:24 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:16.104 20:59:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:16.363 20:59:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:16.363 20:59:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:16.363 00:06:16.363 real 0m0.245s 00:06:16.363 user 0m0.140s 00:06:16.363 sys 0m0.148s 00:06:16.363 20:59:24 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.363 20:59:24 version -- common/autotest_common.sh@10 -- # set +x 00:06:16.363 ************************************ 00:06:16.363 END TEST version 00:06:16.363 ************************************ 00:06:16.363 20:59:24 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:16.363 20:59:24 -- spdk/autotest.sh@194 -- # uname -s 00:06:16.363 20:59:24 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:16.363 20:59:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:16.363 20:59:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:16.363 20:59:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:16.363 20:59:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.363 20:59:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.363 20:59:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:16.363 20:59:24 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:16.363 20:59:24 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.363 20:59:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.363 20:59:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.363 20:59:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.363 ************************************ 00:06:16.363 START TEST nvmf_tcp 00:06:16.363 ************************************ 00:06:16.363 20:59:24 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.363 * Looking for test storage... 
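Each `run_test` block re-runs the same lcov gate, `lt 1.15 2`, which walks `cmp_versions` in `scripts/common.sh`: split both versions on separators, compare component-wise, and treat missing components as 0. A simplified sketch of that comparison (reduced to a less-than test; the real helper also handles `>`, `=`, and `-`/`:` separators):

```shell
# Component-wise "version less-than": 1.15 < 2 because 1 < 2 in the first slot.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        # Missing components compare as 0 (so "2" is treated as "2.0").
        if ((${a[i]:-0} < ${b[i]:-0})); then return 0; fi
        if ((${a[i]:-0} > ${b[i]:-0})); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Note the numeric comparison is what makes `1.9 < 1.15` come out true here, unlike a plain string sort, which is exactly why the harness splits into components instead of comparing whole strings.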
00:06:16.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.363 20:59:24 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.363 20:59:24 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.363 20:59:24 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.623 20:59:24 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.623 20:59:24 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:16.623 20:59:24 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.623 20:59:24 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.623 --rc genhtml_branch_coverage=1 00:06:16.623 --rc genhtml_function_coverage=1 00:06:16.623 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:16.624 20:59:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:16.624 20:59:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:16.624 20:59:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.624 20:59:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.624 20:59:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.624 ************************************ 00:06:16.624 START TEST nvmf_target_core 00:06:16.624 ************************************ 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:16.624 * Looking for test storage... 
00:06:16.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 
00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.624 --rc genhtml_branch_coverage=1 00:06:16.624 --rc genhtml_function_coverage=1 00:06:16.624 --rc genhtml_legend=1 00:06:16.624 --rc geninfo_all_blocks=1 00:06:16.624 --rc geninfo_unexecuted_blocks=1 00:06:16.624 00:06:16.624 ' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:16.624 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
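The `[: : integer expression expected` message above is a real (benign) bug in `nvmf/common.sh` line 33: an empty variable is handed to `-eq`, so `[` cannot parse it as an integer, exactly as the traced `'[' '' -eq 1 ']'` shows. Defaulting the expansion to a number silences the error (the variable name below is hypothetical, standing in for whatever flag line 33 tests):

```shell
flag=""   # unset/empty, as in the failing run above

# Broken form: [ "$flag" -eq 1 ] expands to '[' '' -eq 1 ']' and errors out.
# Safe form: give the empty expansion a numeric default before the integer test.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

Because the failed `[` simply returns non-zero, the script falls into the else branch anyway, which is why the run continues despite the noise.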
00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.884 ************************************ 00:06:16.884 START TEST nvmf_abort 00:06:16.884 ************************************ 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:16.884 * Looking for test storage... 
00:06:16.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.884 
20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.884 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.885 --rc genhtml_branch_coverage=1 00:06:16.885 --rc genhtml_function_coverage=1 00:06:16.885 --rc genhtml_legend=1 00:06:16.885 --rc geninfo_all_blocks=1 00:06:16.885 --rc 
geninfo_unexecuted_blocks=1 00:06:16.885 00:06:16.885 ' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.885 --rc genhtml_branch_coverage=1 00:06:16.885 --rc genhtml_function_coverage=1 00:06:16.885 --rc genhtml_legend=1 00:06:16.885 --rc geninfo_all_blocks=1 00:06:16.885 --rc geninfo_unexecuted_blocks=1 00:06:16.885 00:06:16.885 ' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.885 --rc genhtml_branch_coverage=1 00:06:16.885 --rc genhtml_function_coverage=1 00:06:16.885 --rc genhtml_legend=1 00:06:16.885 --rc geninfo_all_blocks=1 00:06:16.885 --rc geninfo_unexecuted_blocks=1 00:06:16.885 00:06:16.885 ' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.885 --rc genhtml_branch_coverage=1 00:06:16.885 --rc genhtml_function_coverage=1 00:06:16.885 --rc genhtml_legend=1 00:06:16.885 --rc geninfo_all_blocks=1 00:06:16.885 --rc geninfo_unexecuted_blocks=1 00:06:16.885 00:06:16.885 ' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.885 20:59:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.885 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:17.144 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:17.145 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.715 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.715 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.715 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.716 20:59:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:23.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:23.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.716 20:59:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:23.716 Found net devices under 0000:86:00.0: cvl_0_0 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:06:23.716 Found net devices under 0000:86:00.1: cvl_0_1 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:23.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:06:23.716 00:06:23.716 --- 10.0.0.2 ping statistics --- 00:06:23.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.716 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:06:23.716 00:06:23.716 --- 10.0.0.1 ping statistics --- 00:06:23.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.716 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.716 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.717 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.717 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1131842 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1131842 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1131842 ']' 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.717 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.717 [2024-12-05 20:59:31.076625] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:06:23.717 [2024-12-05 20:59:31.076670] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.717 [2024-12-05 20:59:31.154000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.717 [2024-12-05 20:59:31.196615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.717 [2024-12-05 20:59:31.196653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.717 [2024-12-05 20:59:31.196660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.717 [2024-12-05 20:59:31.196666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.717 [2024-12-05 20:59:31.196672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:23.717 [2024-12-05 20:59:31.198228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.717 [2024-12-05 20:59:31.198334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.717 [2024-12-05 20:59:31.198335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 [2024-12-05 20:59:31.949729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 Malloc0 00:06:23.976 20:59:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 Delay0 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 [2024-12-05 20:59:32.027793] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.976 20:59:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:24.236 [2024-12-05 20:59:32.165084] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:26.139 Initializing NVMe Controllers 00:06:26.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.139 controller IO queue size 128 less than required 00:06:26.139 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:26.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:26.139 Initialization complete. Launching workers. 
00:06:26.139 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37515 00:06:26.139 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37580, failed to submit 62 00:06:26.139 success 37519, unsuccessful 61, failed 0 00:06:26.139 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:26.139 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.139 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:26.397 rmmod nvme_tcp 00:06:26.397 rmmod nvme_fabrics 00:06:26.397 rmmod nvme_keyring 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:26.397 20:59:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1131842 ']' 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1131842 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1131842 ']' 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1131842 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1131842 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1131842' 00:06:26.397 killing process with pid 1131842 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1131842 00:06:26.397 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1131842 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.656 20:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:28.583 00:06:28.583 real 0m11.847s 00:06:28.583 user 0m13.613s 00:06:28.583 sys 0m5.515s 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.583 ************************************ 00:06:28.583 END TEST nvmf_abort 00:06:28.583 ************************************ 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.583 20:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.841 ************************************ 00:06:28.841 START TEST nvmf_ns_hotplug_stress 00:06:28.841 ************************************ 00:06:28.841 20:59:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:28.841 * Looking for test storage... 00:06:28.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.841 
20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.841 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.841 20:59:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.842 --rc genhtml_branch_coverage=1 00:06:28.842 --rc genhtml_function_coverage=1 00:06:28.842 --rc genhtml_legend=1 00:06:28.842 --rc geninfo_all_blocks=1 00:06:28.842 --rc geninfo_unexecuted_blocks=1 00:06:28.842 00:06:28.842 ' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.842 --rc genhtml_branch_coverage=1 00:06:28.842 --rc genhtml_function_coverage=1 00:06:28.842 --rc genhtml_legend=1 00:06:28.842 --rc geninfo_all_blocks=1 00:06:28.842 --rc geninfo_unexecuted_blocks=1 00:06:28.842 00:06:28.842 ' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.842 --rc genhtml_branch_coverage=1 00:06:28.842 --rc genhtml_function_coverage=1 00:06:28.842 --rc genhtml_legend=1 00:06:28.842 --rc geninfo_all_blocks=1 00:06:28.842 --rc geninfo_unexecuted_blocks=1 00:06:28.842 00:06:28.842 ' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.842 --rc genhtml_branch_coverage=1 00:06:28.842 --rc genhtml_function_coverage=1 00:06:28.842 --rc genhtml_legend=1 00:06:28.842 --rc geninfo_all_blocks=1 00:06:28.842 --rc geninfo_unexecuted_blocks=1 00:06:28.842 
00:06:28.842 ' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:28.842 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.409 20:59:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:35.409 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:35.409 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.409 20:59:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:35.409 Found net devices under 0000:86:00.0: cvl_0_0 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.409 20:59:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:35.409 Found net devices under 0000:86:00.1: cvl_0_1 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.409 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.410 20:59:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:06:35.410 00:06:35.410 --- 10.0.0.2 ping statistics --- 00:06:35.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.410 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:06:35.410 00:06:35.410 --- 10.0.0.1 ping statistics --- 00:06:35.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.410 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1136090 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1136090 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1136090 ']' 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.410 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.410 [2024-12-05 20:59:43.043825] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:06:35.410 [2024-12-05 20:59:43.043865] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.410 [2024-12-05 20:59:43.119910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.410 [2024-12-05 20:59:43.159341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.410 [2024-12-05 20:59:43.159381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.410 [2024-12-05 20:59:43.159388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.410 [2024-12-05 20:59:43.159394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.410 [2024-12-05 20:59:43.159399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:35.410 [2024-12-05 20:59:43.160864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.410 [2024-12-05 20:59:43.160952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.410 [2024-12-05 20:59:43.160954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:35.410 [2024-12-05 20:59:43.466886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.410 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:35.668 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.926 [2024-12-05 20:59:43.880375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.926 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:36.184 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:36.184 Malloc0 00:06:36.443 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:36.443 Delay0 00:06:36.443 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.724 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:37.034 NULL1 00:06:37.034 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:37.034 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1136398 00:06:37.034 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:37.034 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:37.034 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.303 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.597 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:37.597 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:37.856 true 00:06:37.856 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:37.856 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.856 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.113 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:38.113 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:38.371 true 00:06:38.371 20:59:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:38.371 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.629 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.887 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:38.887 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:39.146 true 00:06:39.146 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:39.146 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.146 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.405 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:39.405 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:39.664 true 00:06:39.664 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:39.664 20:59:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.922 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.192 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:40.192 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:40.192 true 00:06:40.450 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:40.450 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.450 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.708 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:40.708 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:40.967 true 00:06:40.967 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:40.967 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.226 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.484 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:41.484 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:41.743 true 00:06:41.743 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:41.743 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.743 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.002 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:42.002 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:42.261 true 00:06:42.261 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:42.261 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.520 
20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.779 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:42.779 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:43.038 true 00:06:43.038 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:43.038 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.038 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.296 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:43.296 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:43.555 true 00:06:43.555 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:43.555 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.814 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.073 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:44.073 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:44.331 true 00:06:44.331 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:44.331 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.331 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.590 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:44.590 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:44.848 true 00:06:44.848 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:44.848 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.107 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.365 
20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:45.365 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:45.623 true 00:06:45.623 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:45.623 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.623 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.881 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:45.881 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:46.139 true 00:06:46.139 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:46.139 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.397 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.654 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:46.654 20:59:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:46.912 true 00:06:46.912 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:46.912 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.191 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.191 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:47.191 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:47.448 true 00:06:47.448 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:47.448 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.705 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.963 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:47.963 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:47.963 true 00:06:48.220 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:48.220 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.220 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.478 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:48.478 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:48.736 true 00:06:48.736 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:48.736 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.994 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.254 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:49.254 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:49.254 true 00:06:49.512 20:59:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:49.513 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.513 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.772 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:49.772 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:50.030 true 00:06:50.030 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:50.030 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.289 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.548 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:50.548 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:50.548 true 00:06:50.807 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:50.807 20:59:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.807 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.066 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:51.066 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:51.324 true 00:06:51.324 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:51.324 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.583 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.842 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:51.843 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:51.843 true 00:06:52.102 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:52.102 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.102 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.361 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:52.361 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:52.620 true 00:06:52.620 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:52.620 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.879 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.138 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:53.138 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:53.138 true 00:06:53.396 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:53.396 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.396 
21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.655 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:53.655 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:53.914 true 00:06:53.914 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:53.914 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.173 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.433 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:54.433 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:54.692 true 00:06:54.692 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:54.692 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.692 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.951 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:54.951 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:55.210 true 00:06:55.210 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:55.210 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.469 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.729 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:55.729 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:55.988 true 00:06:55.988 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:55.988 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.988 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.247 
21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:56.247 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:56.506 true 00:06:56.506 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:56.506 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.764 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.023 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:57.023 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:57.281 true 00:06:57.281 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:57.281 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.281 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.539 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:57.539 21:00:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:57.797 true 00:06:57.797 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:57.797 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.054 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.312 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:58.312 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:58.571 true 00:06:58.571 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:58.571 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.571 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.829 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:58.829 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:59.088 true 00:06:59.088 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:59.088 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.347 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.605 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:59.605 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:59.863 true 00:06:59.863 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:06:59.863 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.123 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.123 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:00.123 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:00.382 true 00:07:00.382 21:00:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:00.382 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.640 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.899 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:00.899 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:01.158 true 00:07:01.159 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:01.159 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.417 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.417 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:01.417 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:01.676 true 00:07:01.676 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:01.676 21:00:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.935 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.194 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:02.194 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:02.452 true 00:07:02.452 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:02.452 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.711 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.711 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:02.711 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:02.970 true 00:07:02.970 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:02.970 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.230 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.488 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:03.488 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:03.747 true 00:07:03.747 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:03.747 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.005 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.005 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:04.005 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:04.265 true 00:07:04.265 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:04.265 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.524 
21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.783 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:04.783 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:05.042 true 00:07:05.042 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:05.042 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.301 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.301 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:05.301 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:05.559 true 00:07:05.559 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:05.559 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.818 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.076 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:06.076 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:06.334 true 00:07:06.334 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:06.334 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.592 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.592 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:06.592 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:06.850 true 00:07:06.850 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:06.850 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.108 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.366 
21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:07.366 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:07.366 Initializing NVMe Controllers 00:07:07.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:07.366 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:07:07.366 Controller IO queue size 128, less than required. 00:07:07.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:07.366 WARNING: Some requested NVMe devices were skipped 00:07:07.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:07.366 Initialization complete. Launching workers. 00:07:07.366 ======================================================== 00:07:07.366 Latency(us) 00:07:07.366 Device Information : IOPS MiB/s Average min max 00:07:07.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27473.28 13.41 4659.06 2134.75 8528.66 00:07:07.366 ======================================================== 00:07:07.366 Total : 27473.28 13.41 4659.06 2134.75 8528.66 00:07:07.366 00:07:07.623 true 00:07:07.623 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1136398 00:07:07.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1136398) - No such process 00:07:07.623 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1136398 00:07:07.624 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
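The trace above repeats one pattern until the background I/O process exits: check the fio PID is still alive (`kill -0`, ns_hotplug_stress.sh line 44), detach namespace 1, re-attach the Delay0 bdev, then grow the NULL1 null bdev by one unit. The following is a minimal standalone sketch of that loop, not the actual SPDK script: `rpc` is a stub standing in for `scripts/rpc.py` so the sketch runs without a live nvmf target, and the fixed iteration count and starting size are illustrative assumptions.

```shell
# Sketch of the hotplug-stress loop visible in the trace (illustrative only).
rpc() { :; }   # stub; the real test invokes $rootdir/scripts/rpc.py "$@"

null_size=1021   # arbitrary starting point for the sketch
fio_pid=$$       # in the real test this is the backgrounded fio process

for _ in 1 2 3 4 5; do
    kill -0 "$fio_pid" || break   # stop once the I/O workload has exited
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"   # grow NULL1 each pass
done
echo "$null_size"
```

In the real run the loop terminates exactly as the log shows: `kill -0` fails with "No such process" once fio finishes, and the script falls through to `wait` on line 53.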
nqn.2016-06.io.spdk:cnode1 1 00:07:07.881 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.881 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:07.881 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:07.881 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:07.881 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.881 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:08.138 null0 00:07:08.138 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.138 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.138 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:08.396 null1 00:07:08.396 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.396 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.396 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:08.396 null2 00:07:08.658 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.658 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.658 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:08.658 null3 00:07:08.658 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.658 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.658 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:08.917 null4 00:07:08.917 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.917 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.917 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:09.175 null5 00:07:09.175 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.175 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.175 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:09.433 null6 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.433 21:00:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:09.433 null7 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.433 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:09.692 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.692 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.692 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.692 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:09.692 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:09.692 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1142529 1142530 1142531 1142534 1142536 1142538 1142539 1142542
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.693 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:09.950 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.950 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.950 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.951 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.209 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.468 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.727 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.728 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.728 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.987 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.987 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.987 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.987 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.987 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.987 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.987 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.987 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.246 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.247 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.506 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.507 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:11.766 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:11.767 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.767 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:11.767 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.767 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:11.767 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.025 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.026 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.284 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.544 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.803 21:00:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.803 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.062 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.062 21:00:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.344 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:13.655 21:00:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.655 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.655 rmmod nvme_tcp 00:07:13.655 rmmod nvme_fabrics 00:07:13.949 rmmod nvme_keyring 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1136090 ']' 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1136090 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1136090 ']' 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1136090 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.949 21:00:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136090 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136090' 00:07:13.949 killing process with pid 1136090 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1136090 00:07:13.949 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1136090 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.949 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:16.507 00:07:16.507 real 0m47.380s 00:07:16.507 user 3m20.410s 00:07:16.507 sys 0m17.219s 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.507 ************************************ 00:07:16.507 END TEST nvmf_ns_hotplug_stress 00:07:16.507 ************************************ 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.507 ************************************ 00:07:16.507 START TEST nvmf_delete_subsystem 00:07:16.507 ************************************ 00:07:16.507 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:16.508 * Looking for test storage... 
00:07:16.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:16.508 21:00:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.508 --rc genhtml_branch_coverage=1 00:07:16.508 --rc genhtml_function_coverage=1 00:07:16.508 --rc genhtml_legend=1 00:07:16.508 --rc geninfo_all_blocks=1 00:07:16.508 --rc geninfo_unexecuted_blocks=1 00:07:16.508 00:07:16.508 ' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.508 --rc genhtml_branch_coverage=1 00:07:16.508 --rc genhtml_function_coverage=1 00:07:16.508 --rc genhtml_legend=1 00:07:16.508 --rc geninfo_all_blocks=1 00:07:16.508 --rc geninfo_unexecuted_blocks=1 00:07:16.508 00:07:16.508 ' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:16.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.508 --rc genhtml_branch_coverage=1 00:07:16.508 --rc genhtml_function_coverage=1 00:07:16.508 --rc genhtml_legend=1 00:07:16.508 --rc geninfo_all_blocks=1 00:07:16.508 --rc geninfo_unexecuted_blocks=1 00:07:16.508 00:07:16.508 ' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.508 --rc genhtml_branch_coverage=1 00:07:16.508 --rc genhtml_function_coverage=1 00:07:16.508 --rc genhtml_legend=1 00:07:16.508 --rc geninfo_all_blocks=1 00:07:16.508 --rc geninfo_unexecuted_blocks=1 00:07:16.508 00:07:16.508 ' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.508 21:00:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.508 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.509 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.082 21:00:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.082 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:23.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:23.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:23.083 Found net devices under 0000:86:00.0: cvl_0_0 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:07:23.083 Found net devices under 0000:86:00.1: cvl_0_1 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.083 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:23.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:07:23.084 00:07:23.084 --- 10.0.0.2 ping statistics --- 00:07:23.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.084 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:23.084 00:07:23.084 --- 10.0.0.1 ping statistics --- 00:07:23.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.084 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:23.084 21:00:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1146939 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1146939 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1146939 ']' 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.084 21:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.084 [2024-12-05 21:00:30.449552] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:07:23.084 [2024-12-05 21:00:30.449600] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.084 [2024-12-05 21:00:30.530036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.084 [2024-12-05 21:00:30.573019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.084 [2024-12-05 21:00:30.573057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.084 [2024-12-05 21:00:30.573065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.084 [2024-12-05 21:00:30.573072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.084 [2024-12-05 21:00:30.573078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
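The nvmftestinit steps traced earlier (nvmf/common.sh@265-291) build a two-endpoint TCP test topology on a single host by moving the target-side port into a network namespace. A hedged reconstruction of that sequence, with interface names and addresses taken from the log (requires root and the physical NICs; illustrative only, not runnable standalone):

```shell
# Illustrative reconstruction of the traced netns setup; cvl_0_0 /
# cvl_0_1 and the 10.0.0.x addresses come from the log above.
ip netns add cvl_0_0_ns_spdk                # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port (4420) on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # reachability check, as in the log
```

Because the target interface lives in the namespace, nvmf_tgt is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` in the trace.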
00:07:23.084 [2024-12-05 21:00:30.574338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.084 [2024-12-05 21:00:30.574339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.343 [2024-12-05 21:00:31.325606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.343 [2024-12-05 21:00:31.345804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.343 NULL1 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:23.343 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.344 Delay0 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.344 21:00:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1147180 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:23.344 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:23.344 [2024-12-05 21:00:31.446714] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:25.879 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.879 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.879 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error 
(sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 starting I/O failed: -6 00:07:25.879 starting I/O failed: -6 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Read completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.879 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error 
(sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 [2024-12-05 21:00:33.653602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a4a0 is same with the state(6) to be set 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 
00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed 
with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 [2024-12-05 21:00:33.653838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a860 is same with the state(6) to be set 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 
starting I/O failed: -6 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 Read completed with error (sct=0, sc=8) 00:07:25.880 starting I/O failed: -6 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.880 Write completed with error (sct=0, sc=8) 00:07:25.881 Read completed with error (sct=0, sc=8) 00:07:25.881 Read completed with error (sct=0, sc=8) 00:07:25.881 starting I/O failed: -6 00:07:25.881 Read completed with error (sct=0, sc=8) 00:07:25.881 Read completed with error (sct=0, sc=8) 00:07:25.881 Read completed with error (sct=0, sc=8) 00:07:25.881 starting I/O failed: -6 00:07:25.881 starting I/O failed: -6 00:07:25.881 starting I/O failed: -6 00:07:25.881 starting I/O failed: -6 00:07:25.881 starting I/O failed: -6 00:07:25.881 starting I/O failed: -6 00:07:26.818 [2024-12-05 21:00:34.625434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b9b0 is same with the state(6) to be set 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, 
sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 [2024-12-05 21:00:34.655659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a680 is same with the state(6) to be set 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Write completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.818 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 
Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 [2024-12-05 21:00:34.657972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c18000c40 is same with the state(6) to be set 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read 
completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 [2024-12-05 21:00:34.658255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c1800d7e0 is same with the state(6) to be set 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with 
error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Write completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 Read completed with error (sct=0, sc=8) 00:07:26.819 [2024-12-05 21:00:34.658801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c1800d020 is same with the state(6) to be set 00:07:26.819 Initializing NVMe Controllers 00:07:26.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:26.819 Controller IO queue size 128, less than required. 00:07:26.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:26.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:26.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:26.819 Initialization complete. Launching workers. 
00:07:26.819 ======================================================== 00:07:26.819 Latency(us) 00:07:26.819 Device Information : IOPS MiB/s Average min max 00:07:26.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.26 0.08 870806.55 314.82 1007502.61 00:07:26.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.68 0.08 1024122.94 323.45 2000210.80 00:07:26.819 ======================================================== 00:07:26.819 Total : 330.94 0.16 951268.84 314.82 2000210.80 00:07:26.819 00:07:26.819 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.819 [2024-12-05 21:00:34.659365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216b9b0 (9): Bad file descriptor 00:07:26.819 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:26.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:26.819 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1147180 00:07:26.819 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1147180 00:07:27.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1147180) - No such process 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1147180 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:27.077 21:00:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1147180 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1147180 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.077 
21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.077 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.335 [2024-12-05 21:00:35.188327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1147870 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:27.335 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.335 [2024-12-05 21:00:35.280363] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:27.901 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.901 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:27.901 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.159 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.159 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:28.159 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.725 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.725 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:28.725 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:29.291 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:29.291 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:29.291 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:29.857 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:29.857 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:29.857 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.425 21:00:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:30.425 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:30.425 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.425 Initializing NVMe Controllers 00:07:30.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.425 Controller IO queue size 128, less than required. 00:07:30.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:30.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:30.425 Initialization complete. Launching workers. 00:07:30.425 ======================================================== 00:07:30.425 Latency(us) 00:07:30.425 Device Information : IOPS MiB/s Average min max 00:07:30.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003121.71 1000132.41 1009961.23 00:07:30.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004990.21 1000491.10 1043004.79 00:07:30.425 ======================================================== 00:07:30.425 Total : 256.00 0.12 1004055.96 1000132.41 1043004.79 00:07:30.425 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1147870 00:07:30.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1147870) - No such process 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
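The repeating `kill -0 ... / sleep 0.5` lines above are a bounded liveness poll: `kill -0` sends no signal but reports whether the perf process still exists, and the `(( delay++ > 20 ))` guard caps the wait. A self-contained sketch of that pattern (a short `sleep` stands in for the `spdk_nvme_perf` child, which is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the test's bounded wait-for-exit loop. kill -0 probes process
# existence without signalling it; the retry budget bounds the total wait.
sleep 0.3 &                 # hypothetical stand-in for the spdk_nvme_perf child
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    # Post-increment: compare first, then bump, matching (( delay++ > 20 )).
    (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; exit 1; }
    sleep 0.1
done
wait "$perf_pid"            # reap the child and collect its exit status
echo "perf ($perf_pid) exited after $delay polls"
```

Once the process is gone, a bare `kill -0` prints the `No such process` diagnostic seen in the log, which is why the probe's stderr is discarded inside the loop.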
wait 1147870 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.684 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.684 rmmod nvme_tcp 00:07:30.684 rmmod nvme_fabrics 00:07:30.684 rmmod nvme_keyring 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1146939 ']' 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1146939 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1146939 ']' 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1146939 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:30.943 21:00:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146939 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146939' 00:07:30.943 killing process with pid 1146939 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1146939 00:07:30.943 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1146939 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.943 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.480 00:07:33.480 real 0m16.931s 00:07:33.480 user 0m30.920s 00:07:33.480 sys 0m5.552s 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 ************************************ 00:07:33.480 END TEST nvmf_delete_subsystem 00:07:33.480 ************************************ 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 ************************************ 00:07:33.480 START TEST nvmf_host_management 00:07:33.480 ************************************ 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:33.480 * Looking for test storage... 
00:07:33.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.480 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:33.481 21:00:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.481 21:00:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.481 --rc genhtml_branch_coverage=1 00:07:33.481 --rc genhtml_function_coverage=1 00:07:33.481 --rc genhtml_legend=1 00:07:33.481 --rc geninfo_all_blocks=1 00:07:33.481 --rc geninfo_unexecuted_blocks=1 00:07:33.481 00:07:33.481 ' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.481 --rc genhtml_branch_coverage=1 00:07:33.481 --rc genhtml_function_coverage=1 00:07:33.481 --rc genhtml_legend=1 00:07:33.481 --rc geninfo_all_blocks=1 00:07:33.481 --rc geninfo_unexecuted_blocks=1 00:07:33.481 00:07:33.481 ' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.481 --rc genhtml_branch_coverage=1 00:07:33.481 --rc genhtml_function_coverage=1 00:07:33.481 --rc genhtml_legend=1 00:07:33.481 --rc geninfo_all_blocks=1 00:07:33.481 --rc geninfo_unexecuted_blocks=1 00:07:33.481 00:07:33.481 ' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.481 --rc genhtml_branch_coverage=1 00:07:33.481 --rc genhtml_function_coverage=1 00:07:33.481 --rc genhtml_legend=1 00:07:33.481 --rc geninfo_all_blocks=1 00:07:33.481 --rc geninfo_unexecuted_blocks=1 00:07:33.481 00:07:33.481 ' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.481 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.482 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.052 21:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.052 21:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:40.052 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:40.052 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.052 21:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:40.052 Found net devices under 0000:86:00.0: cvl_0_0 00:07:40.052 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:40.053 Found net devices under 0000:86:00.1: cvl_0_1 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.053 21:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
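The `ipts` call above (nvmf/common.sh@287/@790) installs its firewall rule with an identifying `-m comment` tag, which is what lets the earlier teardown (`iptables-save | grep -v SPDK_NVMF | iptables-restore`, nvmf/common.sh@791) sweep away only the test's rules. A minimal sketch of that tag-and-sweep pattern, written in dry-run form (the functions echo the commands instead of executing them, since real `iptables` needs root; function bodies are an illustration, not the script's exact code):

```shell
# Tag-and-sweep firewall rules: every installed rule carries a marker comment,
# so cleanup can strip all of them at once without tracking each rule.
TAG=SPDK_NVMF   # same marker the SPDK scripts grep for during teardown

# ipts: install a rule, appending "-m comment --comment 'TAG:<rule args>'".
# Dry-run sketch: echoes the command line it would run.
ipts() {
    echo iptables "$@" -m comment --comment "$TAG:$*"
}

# iptr: remove every tagged rule by filtering them out of the saved ruleset.
# Dry-run sketch of: iptables-save | grep -v "$TAG" | iptables-restore
iptr() {
    echo "iptables-save | grep -v $TAG | iptables-restore"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
iptr
```

The payoff is that teardown never needs to remember what setup did: any rule carrying the marker, from any test, is removed in one pass.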
00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:07:40.053 00:07:40.053 --- 10.0.0.2 ping statistics --- 00:07:40.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.053 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:07:40.053 00:07:40.053 --- 10.0.0.1 ping statistics --- 00:07:40.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.053 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
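The `nvmf_tcp_init` trace above (nvmf/common.sh@250–@291) turns the two physical ports into a self-contained target/initiator rig: `cvl_0_0` is moved into a fresh network namespace as the target at 10.0.0.2, `cvl_0_1` stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before any NVMe-oF traffic starts. The sequence can be sketched as below, in dry-run form so it runs unprivileged (`run` echoes each command; drop it and run as root to perform the setup for real — interface and namespace names are the ones from this run):

```shell
# Isolated TCP test rig: one NIC becomes the target inside a netns, the
# other stays in the root namespace as the initiator.
NS=cvl_0_0_ns_spdk        # namespace that will own the target-side port
TGT_IF=cvl_0_0            # target interface -> 10.0.0.2 inside $NS
INI_IF=cvl_0_1            # initiator interface -> 10.0.0.1 in the root ns

run() { echo "+ $*"; }    # dry-run: print instead of executing (real setup needs root)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"          # target port leaves the root ns

run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up      # loopback for the RPC socket side

# Sanity-check both directions before launching nvmf_tgt in the namespace.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target in its own namespace forces all traffic onto the real NICs and cabling (the ports are wired back to back on this test bed), rather than being short-circuited through the kernel loopback.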
00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1152085 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1152085 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1152085 ']' 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
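`waitforlisten 1152085` above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock` (with `max_retries=100`, per autotest_common.sh@840). The real helper also checks that the pid is still alive and that the socket actually answers; the sketch below (hypothetical helper name `wait_for_path`) covers only the core polling loop, under those stated simplifications:

```shell
# Poll until a path (e.g. the SPDK RPC socket /var/tmp/spdk.sock) appears.
# Simplified sketch: the real waitforlisten also verifies the process is
# alive and that the RPC endpoint responds, not just that the path exists.
wait_for_path() {
    path=$1
    retries=${2:-100}             # default mirrors max_retries=100 in the log
    while [ "$retries" -gt 0 ]; do
        [ -e "$path" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                      # timed out; caller should kill the app and fail
}

# Demo: wait for a file we create ourselves, so the sketch runs standalone.
demo=$(mktemp)
wait_for_path "$demo" 5 && echo "found $demo"
rm -f "$demo"
```

Polling with a bounded retry count is what keeps a target that crashes on startup from hanging the whole pipeline: the helper gives up after roughly ten seconds and lets the trap-based cleanup run.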
00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.053 21:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.053 [2024-12-05 21:00:47.521125] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:07:40.053 [2024-12-05 21:00:47.521173] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.053 [2024-12-05 21:00:47.600228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.054 [2024-12-05 21:00:47.647054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.054 [2024-12-05 21:00:47.647091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.054 [2024-12-05 21:00:47.647098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.054 [2024-12-05 21:00:47.647104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.054 [2024-12-05 21:00:47.647109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:40.054 [2024-12-05 21:00:47.648583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.054 [2024-12-05 21:00:47.648689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.054 [2024-12-05 21:00:47.648795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.054 [2024-12-05 21:00:47.648796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.312 [2024-12-05 21:00:48.403530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:40.312 21:00:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:40.312 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.573 Malloc0 00:07:40.573 [2024-12-05 21:00:48.472395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1152163 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1152163 /var/tmp/bdevperf.sock 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1152163 ']' 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:40.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:40.573 { 00:07:40.573 "params": { 00:07:40.573 "name": "Nvme$subsystem", 00:07:40.573 "trtype": "$TEST_TRANSPORT", 00:07:40.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:40.573 "adrfam": "ipv4", 00:07:40.573 "trsvcid": "$NVMF_PORT", 00:07:40.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:40.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:40.573 "hdgst": ${hdgst:-false}, 
00:07:40.573 "ddgst": ${ddgst:-false} 00:07:40.573 }, 00:07:40.573 "method": "bdev_nvme_attach_controller" 00:07:40.573 } 00:07:40.573 EOF 00:07:40.573 )") 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:40.573 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:40.573 "params": { 00:07:40.573 "name": "Nvme0", 00:07:40.573 "trtype": "tcp", 00:07:40.573 "traddr": "10.0.0.2", 00:07:40.573 "adrfam": "ipv4", 00:07:40.573 "trsvcid": "4420", 00:07:40.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:40.573 "hdgst": false, 00:07:40.573 "ddgst": false 00:07:40.573 }, 00:07:40.573 "method": "bdev_nvme_attach_controller" 00:07:40.573 }' 00:07:40.573 [2024-12-05 21:00:48.568346] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:07:40.573 [2024-12-05 21:00:48.568397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152163 ] 00:07:40.573 [2024-12-05 21:00:48.643807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.831 [2024-12-05 21:00:48.685263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.831 Running I/O for 10 seconds... 
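The gen_nvmf_target_json trace above shows how the harness assembles bdevperf's --json config: a heredoc template is expanded once per subsystem, appended to a config array, and the result is normalized with `jq .` before being handed to bdevperf on /dev/fd/63. A minimal, runnable sketch of that pattern (variable values are taken from the config printed in the log; the real helper lives in nvmf/common.sh):

```shell
# Sketch of the heredoc-template pattern used by gen_nvmf_target_json.
# Values mirror the config printed in the log; hdgst/ddgst fall back to
# false via ${var:-false}, exactly as in the traced template.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_subsystem_config() {
    subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# The real helper pipes this through 'jq .' and feeds it to bdevperf
# via --json /dev/fd/63; here we simply print subsystem 0's config.
gen_subsystem_config 0
```

The printed JSON matches the block emitted right after the `printf '%s\n'` trace in the log.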
00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:41.089 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:41.090 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.090 21:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.090 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.090 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:41.090 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:41.090 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:41.349 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:41.349 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:41.349 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.350 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.350 [2024-12-05 21:00:49.335253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2226b60 is same with the state(6) to be set 00:07:41.350 [2024-12-05 21:00:49.335421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.350 [2024-12-05 21:00:49.335453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.350 [2024-12-05 21:00:49.335469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.350 [2024-12-05 21:00:49.335478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.350 [2024-12-05 21:00:49.335487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.350 [2024-12-05 21:00:49.335494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.350 [... 61 further in-flight commands (WRITE sqid:1 cid:54-63, lba:105216-106368; READ sqid:1 cid:0-50, lba:98304-104704; len:128 each) printed by nvme_qpair.c with the same ABORTED - SQ DELETION (00/08) completion between 21:00:49.335503 and 21:00:49.336415 ...] 00:07:41.351 [2024-12-05 21:00:49.337382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:41.351 task offset: 104832 on job bdev=Nvme0n1 fails 00:07:41.351 00:07:41.351 Latency(us) 00:07:41.351 [2024-12-05T20:00:49.459Z]
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.351 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:41.351 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:41.351 Verification LBA range: start 0x0 length 0x400 00:07:41.351 Nvme0n1 : 0.40 1905.56 119.10 158.80 0.00 30180.13 1560.38 27337.87 00:07:41.351 [2024-12-05T20:00:49.459Z] =================================================================================================================== 00:07:41.351 [2024-12-05T20:00:49.459Z] Total : 1905.56 119.10 158.80 0.00 30180.13 1560.38 27337.87 00:07:41.351 [2024-12-05 21:00:49.339762] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.351 [2024-12-05 21:00:49.339781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70120 (9): Bad file descriptor 00:07:41.351 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.351 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.351 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.351 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.351 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.351 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:41.351 [2024-12-05 21:00:49.391554] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1152163
00:07:42.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1152163) - No such process
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:42.287 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:42.287 {
00:07:42.288 "params": {
00:07:42.288 "name": "Nvme$subsystem",
00:07:42.288 "trtype": "$TEST_TRANSPORT",
00:07:42.288 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:42.288 "adrfam": "ipv4",
00:07:42.288 "trsvcid": "$NVMF_PORT",
00:07:42.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:42.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:42.288 "hdgst": ${hdgst:-false},
00:07:42.288 "ddgst": ${ddgst:-false}
00:07:42.288 },
00:07:42.288 "method": "bdev_nvme_attach_controller"
00:07:42.288 }
00:07:42.288 EOF
00:07:42.288 )")
00:07:42.288 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:07:42.288 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:07:42.288 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:07:42.288 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:42.288 "params": {
00:07:42.288 "name": "Nvme0",
00:07:42.288 "trtype": "tcp",
00:07:42.288 "traddr": "10.0.0.2",
00:07:42.288 "adrfam": "ipv4",
00:07:42.288 "trsvcid": "4420",
00:07:42.288 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:42.288 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:42.288 "hdgst": false,
00:07:42.288 "ddgst": false
00:07:42.288 },
00:07:42.288 "method": "bdev_nvme_attach_controller"
00:07:42.288 }'
00:07:42.545 [2024-12-05 21:00:50.401782] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:07:42.545 [2024-12-05 21:00:50.401832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152623 ]
00:07:42.545 [2024-12-05 21:00:50.474773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.545 [2024-12-05 21:00:50.515942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.802 Running I/O for 1 seconds...
00:07:43.736 2010.00 IOPS, 125.62 MiB/s
00:07:43.736 Latency(us)
00:07:43.736 [2024-12-05T20:00:51.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:43.736 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:43.736 Verification LBA range: start 0x0 length 0x400
00:07:43.736 Nvme0n1 : 1.01 2052.45 128.28 0.00 0.00 30576.58 2808.69 27213.04
00:07:43.736 [2024-12-05T20:00:51.844Z] ===================================================================================================================
00:07:43.736 [2024-12-05T20:00:51.844Z] Total : 2052.45 128.28 0.00 0.00 30576.58 2808.69 27213.04
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:43.995 rmmod nvme_tcp
00:07:43.995 rmmod nvme_fabrics
00:07:43.995 rmmod nvme_keyring
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1152085 ']'
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1152085
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1152085 ']'
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1152085
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.995 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152085
00:07:43.995 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:43.995 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:43.995 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1152085'
00:07:43.995 killing process with pid 1152085
00:07:43.995 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1152085
00:07:43.995 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1152085
00:07:44.253 [2024-12-05 21:00:52.189939] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:44.253 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:07:46.788
00:07:46.788 real 0m13.113s
00:07:46.788 user 0m22.298s
00:07:46.788 sys 0m5.724s 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.788 ************************************ 00:07:46.788 END TEST nvmf_host_management 00:07:46.788 ************************************ 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.788 ************************************ 00:07:46.788 START TEST nvmf_lvol 00:07:46.788 ************************************ 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.788 * Looking for test storage... 
00:07:46.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.788 21:00:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.788 --rc genhtml_branch_coverage=1 00:07:46.788 --rc genhtml_function_coverage=1 00:07:46.788 --rc genhtml_legend=1 00:07:46.788 --rc geninfo_all_blocks=1 00:07:46.788 --rc geninfo_unexecuted_blocks=1 
00:07:46.788 00:07:46.788 ' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.788 --rc genhtml_branch_coverage=1 00:07:46.788 --rc genhtml_function_coverage=1 00:07:46.788 --rc genhtml_legend=1 00:07:46.788 --rc geninfo_all_blocks=1 00:07:46.788 --rc geninfo_unexecuted_blocks=1 00:07:46.788 00:07:46.788 ' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.788 --rc genhtml_branch_coverage=1 00:07:46.788 --rc genhtml_function_coverage=1 00:07:46.788 --rc genhtml_legend=1 00:07:46.788 --rc geninfo_all_blocks=1 00:07:46.788 --rc geninfo_unexecuted_blocks=1 00:07:46.788 00:07:46.788 ' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.788 --rc genhtml_branch_coverage=1 00:07:46.788 --rc genhtml_function_coverage=1 00:07:46.788 --rc genhtml_legend=1 00:07:46.788 --rc geninfo_all_blocks=1 00:07:46.788 --rc geninfo_unexecuted_blocks=1 00:07:46.788 00:07:46.788 ' 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.788 21:00:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.788 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.789 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.352 
21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.352 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.352 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.353 21:01:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.353 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:07:53.353 00:07:53.353 --- 10.0.0.2 ping statistics --- 00:07:53.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.353 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:07:53.353 00:07:53.353 --- 10.0.0.1 ping statistics --- 00:07:53.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.353 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1156404 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1156404 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1156404 ']' 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.353 [2024-12-05 21:01:00.663282] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:07:53.353 [2024-12-05 21:01:00.663332] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.353 [2024-12-05 21:01:00.742562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.353 [2024-12-05 21:01:00.784034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.353 [2024-12-05 21:01:00.784072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.353 [2024-12-05 21:01:00.784080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.353 [2024-12-05 21:01:00.784087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.353 [2024-12-05 21:01:00.784092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
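The namespace plumbing the harness just performed (create a netns, move the target-side port into it, address both sides, open the NVMe/TCP port, ping both directions) can be condensed into the sketch below. This is a dry run: `run()` only echoes, since the real commands need root and the physical `cvl_0_0`/`cvl_0_1` ports; the interface names and 10.0.0.0/24 addresses are the values from this log, not requirements. Swap `run()` for `eval "$@"` on a real system to execute.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and addresses are taken
# from this log run; run() records and echoes instead of executing.
NS=cvl_0_0_ns_spdk
CMDS=""
run() { CMDS+="$*"$'\n'; echo "+ $*"; }

run ip netns add "$NS"                        # isolated stack for the target
run ip link set cvl_0_0 netns "$NS"           # target-side port moves into it
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

With the namespace up, the log then launches `nvmf_tgt` inside it via `ip netns exec cvl_0_0_ns_spdk`, which is why the DPDK/EAL startup notices above come from a process bound to the namespaced interface.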
00:07:53.353 [2024-12-05 21:01:00.785469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.353 [2024-12-05 21:01:00.785574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.353 [2024-12-05 21:01:00.785575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.353 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.353 [2024-12-05 21:01:01.096001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.353 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.353 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:53.353 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.612 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:53.612 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:53.871 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:54.129 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8b22c8b5-0952-4a99-8dd1-cac386b0f699 00:07:54.129 21:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b22c8b5-0952-4a99-8dd1-cac386b0f699 lvol 20 00:07:54.129 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bf23b1b8-0354-4945-9c64-4e192c21ddb3 00:07:54.129 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.389 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf23b1b8-0354-4945-9c64-4e192c21ddb3 00:07:54.648 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:54.907 [2024-12-05 21:01:02.761455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.907 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.907 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1156891 00:07:54.907 21:01:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:54.907 21:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:56.285 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bf23b1b8-0354-4945-9c64-4e192c21ddb3 MY_SNAPSHOT 00:07:56.285 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b2e455fd-618a-4d43-af44-04946c87dab9 00:07:56.285 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bf23b1b8-0354-4945-9c64-4e192c21ddb3 30 00:07:56.545 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b2e455fd-618a-4d43-af44-04946c87dab9 MY_CLONE 00:07:56.803 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=964731d8-8503-4748-b872-7c38cd46c631 00:07:56.803 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 964731d8-8503-4748-b872-7c38cd46c631 00:07:57.370 21:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1156891 00:08:05.485 Initializing NVMe Controllers 00:08:05.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:05.485 Controller IO queue size 128, less than required. 00:08:05.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
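The lvol lifecycle the test drove above via `rpc.py` (malloc bdevs, raid0, lvstore, lvol, export over NVMe/TCP, then snapshot, grow, clone, inflate while perf runs) can be condensed as below. This is a dry run: `rpc()` records and echoes instead of calling a live target, and the UUIDs printed in the log are runtime values, so symbolic placeholders stand in for them here; the `RPC` path assumes an SPDK checkout.

```shell
#!/usr/bin/env bash
# Dry-run condensation of the nvmf_lvol.sh RPC sequence in this log.
# rpc() only records/echoes; point RPC at scripts/rpc.py next to a
# running nvmf_tgt to execute. UUID placeholders replace the per-run
# values (8b22c8b5-..., bf23b1b8-..., ...) printed above.
RPC="scripts/rpc.py"            # assumed path inside an SPDK checkout
LVS_UUID="<lvstore-uuid>"       # returned by bdev_lvol_create_lvstore
LVOL_UUID="<lvol-uuid>"         # returned by bdev_lvol_create
SNAP_UUID="<snapshot-uuid>"     # returned by bdev_lvol_snapshot
CLONE_UUID="<clone-uuid>"       # returned by bdev_lvol_clone
CALLS=""
rpc() { CALLS+="$*"$'\n'; echo "+ $RPC $*"; }

rpc bdev_malloc_create 64 512                          # -> Malloc0
rpc bdev_malloc_create 64 512                          # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs                 # prints lvstore UUID
rpc bdev_lvol_create -u "$LVS_UUID" lvol 20            # 20 MiB logical volume
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ...spdk_nvme_perf writes to the exported lvol while the rest runs...
rpc bdev_lvol_snapshot "$LVOL_UUID" MY_SNAPSHOT
rpc bdev_lvol_resize "$LVOL_UUID" 30                   # grow 20 -> 30
rpc bdev_lvol_clone "$SNAP_UUID" MY_CLONE
rpc bdev_lvol_inflate "$CLONE_UUID"                    # decouple clone from snapshot
```

Running snapshot/resize/clone/inflate concurrently with the `spdk_nvme_perf` random-write load is the point of the test: the lvol metadata operations must stay correct under active I/O, which is what the perf summary that follows confirms completed.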
00:08:05.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:05.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:05.485 Initialization complete. Launching workers. 00:08:05.485 ======================================================== 00:08:05.485 Latency(us) 00:08:05.485 Device Information : IOPS MiB/s Average min max 00:08:05.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12109.20 47.30 10569.00 1728.69 81268.46 00:08:05.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12011.50 46.92 10660.12 3617.71 54939.74 00:08:05.485 ======================================================== 00:08:05.485 Total : 24120.70 94.22 10614.38 1728.69 81268.46 00:08:05.485 00:08:05.485 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.485 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf23b1b8-0354-4945-9c64-4e192c21ddb3 00:08:05.743 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b22c8b5-0952-4a99-8dd1-cac386b0f699 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.003 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.003 rmmod nvme_tcp 00:08:06.003 rmmod nvme_fabrics 00:08:06.003 rmmod nvme_keyring 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1156404 ']' 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1156404 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1156404 ']' 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1156404 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.003 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1156404 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1156404' 00:08:06.263 killing process with pid 1156404 00:08:06.263 21:01:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1156404 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1156404 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.263 21:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.801 00:08:08.801 real 0m22.035s 00:08:08.801 user 1m3.370s 00:08:08.801 sys 0m7.631s 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.801 ************************************ 00:08:08.801 END TEST 
nvmf_lvol 00:08:08.801 ************************************ 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.801 ************************************ 00:08:08.801 START TEST nvmf_lvs_grow 00:08:08.801 ************************************ 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.801 * Looking for test storage... 00:08:08.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.801 21:01:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.801 --rc genhtml_branch_coverage=1 00:08:08.801 --rc genhtml_function_coverage=1 00:08:08.801 --rc genhtml_legend=1 00:08:08.801 --rc geninfo_all_blocks=1 00:08:08.801 --rc geninfo_unexecuted_blocks=1 00:08:08.801 00:08:08.801 ' 
00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.801 --rc genhtml_branch_coverage=1 00:08:08.801 --rc genhtml_function_coverage=1 00:08:08.801 --rc genhtml_legend=1 00:08:08.801 --rc geninfo_all_blocks=1 00:08:08.801 --rc geninfo_unexecuted_blocks=1 00:08:08.801 00:08:08.801 ' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.801 --rc genhtml_branch_coverage=1 00:08:08.801 --rc genhtml_function_coverage=1 00:08:08.801 --rc genhtml_legend=1 00:08:08.801 --rc geninfo_all_blocks=1 00:08:08.801 --rc geninfo_unexecuted_blocks=1 00:08:08.801 00:08:08.801 ' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.801 --rc genhtml_branch_coverage=1 00:08:08.801 --rc genhtml_function_coverage=1 00:08:08.801 --rc genhtml_legend=1 00:08:08.801 --rc geninfo_all_blocks=1 00:08:08.801 --rc geninfo_unexecuted_blocks=1 00:08:08.801 00:08:08.801 ' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.801 21:01:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.801 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.801 
21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.802 21:01:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.802 
21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.802 21:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:15.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:15.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.366 
21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:15.366 Found net devices under 0000:86:00.0: cvl_0_0 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:15.366 Found net devices under 0000:86:00.1: cvl_0_1 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.366 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.367 21:01:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:08:15.367 00:08:15.367 --- 10.0.0.2 ping statistics --- 00:08:15.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.367 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:15.367 00:08:15.367 --- 10.0.0.1 ping statistics --- 00:08:15.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.367 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1162280 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1162280 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1162280 ']' 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.367 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.367 [2024-12-05 21:01:22.808910] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:08:15.367 [2024-12-05 21:01:22.808956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.367 [2024-12-05 21:01:22.886133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.367 [2024-12-05 21:01:22.926708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.367 [2024-12-05 21:01:22.926743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.367 [2024-12-05 21:01:22.926750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.367 [2024-12-05 21:01:22.926756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.367 [2024-12-05 21:01:22.926761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.367 [2024-12-05 21:01:22.927291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.367 [2024-12-05 21:01:23.228025] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.367 ************************************ 00:08:15.367 START TEST lvs_grow_clean 00:08:15.367 ************************************ 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.367 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.625 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:15.625 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:15.625 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:15.625 21:01:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:15.625 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:15.884 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:15.884 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:15.884 21:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a9704fcb-a662-40aa-beb3-9096082f6f59 lvol 150 00:08:16.143 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b580e54a-105c-4e1a-a925-083977b0e235 00:08:16.143 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.143 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:16.401 [2024-12-05 21:01:24.272030] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:16.401 [2024-12-05 21:01:24.272079] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:16.401 true 00:08:16.401 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:16.401 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:16.401 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:16.401 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:16.660 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b580e54a-105c-4e1a-a925-083977b0e235 00:08:16.970 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:16.970 [2024-12-05 21:01:24.990169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.970 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1162779 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.250 21:01:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1162779 /var/tmp/bdevperf.sock 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1162779 ']' 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.250 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.250 [2024-12-05 21:01:25.239461] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:08:17.250 [2024-12-05 21:01:25.239507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162779 ] 00:08:17.250 [2024-12-05 21:01:25.314400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.589 [2024-12-05 21:01:25.355567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.589 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.589 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:17.589 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.589 Nvme0n1 00:08:17.847 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:17.847 [ 00:08:17.847 { 00:08:17.847 "name": "Nvme0n1", 00:08:17.847 "aliases": [ 00:08:17.847 "b580e54a-105c-4e1a-a925-083977b0e235" 00:08:17.847 ], 00:08:17.847 "product_name": "NVMe disk", 00:08:17.847 "block_size": 4096, 00:08:17.847 "num_blocks": 38912, 00:08:17.847 "uuid": "b580e54a-105c-4e1a-a925-083977b0e235", 00:08:17.847 "numa_id": 1, 00:08:17.847 "assigned_rate_limits": { 00:08:17.847 "rw_ios_per_sec": 0, 00:08:17.847 "rw_mbytes_per_sec": 0, 00:08:17.847 "r_mbytes_per_sec": 0, 00:08:17.847 "w_mbytes_per_sec": 0 00:08:17.847 }, 00:08:17.847 "claimed": false, 00:08:17.847 "zoned": false, 00:08:17.847 "supported_io_types": { 00:08:17.847 "read": true, 
00:08:17.847 "write": true, 00:08:17.847 "unmap": true, 00:08:17.847 "flush": true, 00:08:17.847 "reset": true, 00:08:17.847 "nvme_admin": true, 00:08:17.847 "nvme_io": true, 00:08:17.847 "nvme_io_md": false, 00:08:17.847 "write_zeroes": true, 00:08:17.847 "zcopy": false, 00:08:17.847 "get_zone_info": false, 00:08:17.847 "zone_management": false, 00:08:17.847 "zone_append": false, 00:08:17.847 "compare": true, 00:08:17.847 "compare_and_write": true, 00:08:17.847 "abort": true, 00:08:17.847 "seek_hole": false, 00:08:17.847 "seek_data": false, 00:08:17.847 "copy": true, 00:08:17.847 "nvme_iov_md": false 00:08:17.847 }, 00:08:17.847 "memory_domains": [ 00:08:17.847 { 00:08:17.847 "dma_device_id": "system", 00:08:17.847 "dma_device_type": 1 00:08:17.847 } 00:08:17.847 ], 00:08:17.847 "driver_specific": { 00:08:17.847 "nvme": [ 00:08:17.847 { 00:08:17.847 "trid": { 00:08:17.847 "trtype": "TCP", 00:08:17.847 "adrfam": "IPv4", 00:08:17.847 "traddr": "10.0.0.2", 00:08:17.847 "trsvcid": "4420", 00:08:17.847 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:17.847 }, 00:08:17.847 "ctrlr_data": { 00:08:17.847 "cntlid": 1, 00:08:17.847 "vendor_id": "0x8086", 00:08:17.847 "model_number": "SPDK bdev Controller", 00:08:17.847 "serial_number": "SPDK0", 00:08:17.847 "firmware_revision": "25.01", 00:08:17.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:17.847 "oacs": { 00:08:17.847 "security": 0, 00:08:17.847 "format": 0, 00:08:17.847 "firmware": 0, 00:08:17.847 "ns_manage": 0 00:08:17.847 }, 00:08:17.847 "multi_ctrlr": true, 00:08:17.847 "ana_reporting": false 00:08:17.847 }, 00:08:17.847 "vs": { 00:08:17.847 "nvme_version": "1.3" 00:08:17.847 }, 00:08:17.847 "ns_data": { 00:08:17.847 "id": 1, 00:08:17.847 "can_share": true 00:08:17.847 } 00:08:17.847 } 00:08:17.847 ], 00:08:17.847 "mp_policy": "active_passive" 00:08:17.847 } 00:08:17.847 } 00:08:17.847 ] 00:08:17.847 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1162801 00:08:17.847 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.847 21:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.104 Running I/O for 10 seconds... 00:08:19.037 Latency(us) 00:08:19.037 [2024-12-05T20:01:27.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.037 Nvme0n1 : 1.00 23372.00 91.30 0.00 0.00 0.00 0.00 0.00 00:08:19.037 [2024-12-05T20:01:27.145Z] =================================================================================================================== 00:08:19.037 [2024-12-05T20:01:27.145Z] Total : 23372.00 91.30 0.00 0.00 0.00 0.00 0.00 00:08:19.037 00:08:19.971 21:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:19.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.971 Nvme0n1 : 2.00 23532.00 91.92 0.00 0.00 0.00 0.00 0.00 00:08:19.971 [2024-12-05T20:01:28.079Z] =================================================================================================================== 00:08:19.971 [2024-12-05T20:01:28.079Z] Total : 23532.00 91.92 0.00 0.00 0.00 0.00 0.00 00:08:19.971 00:08:20.229 true 00:08:20.229 21:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:20.229 21:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:20.229 21:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.229 21:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.229 21:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1162801 00:08:21.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.166 Nvme0n1 : 3.00 23587.00 92.14 0.00 0.00 0.00 0.00 0.00 00:08:21.166 [2024-12-05T20:01:29.274Z] =================================================================================================================== 00:08:21.166 [2024-12-05T20:01:29.274Z] Total : 23587.00 92.14 0.00 0.00 0.00 0.00 0.00 00:08:21.166 00:08:22.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.102 Nvme0n1 : 4.00 23644.50 92.36 0.00 0.00 0.00 0.00 0.00 00:08:22.102 [2024-12-05T20:01:30.210Z] =================================================================================================================== 00:08:22.102 [2024-12-05T20:01:30.210Z] Total : 23644.50 92.36 0.00 0.00 0.00 0.00 0.00 00:08:22.102 00:08:23.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.039 Nvme0n1 : 5.00 23596.80 92.17 0.00 0.00 0.00 0.00 0.00 00:08:23.039 [2024-12-05T20:01:31.147Z] =================================================================================================================== 00:08:23.039 [2024-12-05T20:01:31.147Z] Total : 23596.80 92.17 0.00 0.00 0.00 0.00 0.00 00:08:23.039 00:08:23.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.974 Nvme0n1 : 6.00 23646.83 92.37 0.00 0.00 0.00 0.00 0.00 00:08:23.974 [2024-12-05T20:01:32.082Z] =================================================================================================================== 00:08:23.974 
[2024-12-05T20:01:32.082Z] Total : 23646.83 92.37 0.00 0.00 0.00 0.00 0.00 00:08:23.974 00:08:24.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.909 Nvme0n1 : 7.00 23701.71 92.58 0.00 0.00 0.00 0.00 0.00 00:08:24.909 [2024-12-05T20:01:33.017Z] =================================================================================================================== 00:08:24.909 [2024-12-05T20:01:33.017Z] Total : 23701.71 92.58 0.00 0.00 0.00 0.00 0.00 00:08:24.909 00:08:26.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.319 Nvme0n1 : 8.00 23729.50 92.69 0.00 0.00 0.00 0.00 0.00 00:08:26.319 [2024-12-05T20:01:34.427Z] =================================================================================================================== 00:08:26.319 [2024-12-05T20:01:34.427Z] Total : 23729.50 92.69 0.00 0.00 0.00 0.00 0.00 00:08:26.319 00:08:27.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.251 Nvme0n1 : 9.00 23757.56 92.80 0.00 0.00 0.00 0.00 0.00 00:08:27.251 [2024-12-05T20:01:35.359Z] =================================================================================================================== 00:08:27.251 [2024-12-05T20:01:35.359Z] Total : 23757.56 92.80 0.00 0.00 0.00 0.00 0.00 00:08:27.251 00:08:28.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.184 Nvme0n1 : 10.00 23765.50 92.83 0.00 0.00 0.00 0.00 0.00 00:08:28.184 [2024-12-05T20:01:36.292Z] =================================================================================================================== 00:08:28.184 [2024-12-05T20:01:36.292Z] Total : 23765.50 92.83 0.00 0.00 0.00 0.00 0.00 00:08:28.184 00:08:28.184 00:08:28.184 Latency(us) 00:08:28.184 [2024-12-05T20:01:36.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:28.184 Nvme0n1 : 10.00 23769.73 92.85 0.00 0.00 5381.99 2481.01 14730.00 00:08:28.184 [2024-12-05T20:01:36.292Z] =================================================================================================================== 00:08:28.184 [2024-12-05T20:01:36.292Z] Total : 23769.73 92.85 0.00 0.00 5381.99 2481.01 14730.00 00:08:28.184 { 00:08:28.184 "results": [ 00:08:28.184 { 00:08:28.184 "job": "Nvme0n1", 00:08:28.184 "core_mask": "0x2", 00:08:28.184 "workload": "randwrite", 00:08:28.184 "status": "finished", 00:08:28.184 "queue_depth": 128, 00:08:28.184 "io_size": 4096, 00:08:28.184 "runtime": 10.003605, 00:08:28.184 "iops": 23769.731011970183, 00:08:28.184 "mibps": 92.85051176550853, 00:08:28.184 "io_failed": 0, 00:08:28.184 "io_timeout": 0, 00:08:28.184 "avg_latency_us": 5381.986882750038, 00:08:28.184 "min_latency_us": 2481.0057142857145, 00:08:28.184 "max_latency_us": 14729.996190476191 00:08:28.184 } 00:08:28.184 ], 00:08:28.184 "core_count": 1 00:08:28.184 } 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1162779 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1162779 ']' 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1162779 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1162779 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:28.184 21:01:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1162779' 00:08:28.184 killing process with pid 1162779 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1162779 00:08:28.184 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.184 00:08:28.184 Latency(us) 00:08:28.184 [2024-12-05T20:01:36.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.184 [2024-12-05T20:01:36.292Z] =================================================================================================================== 00:08:28.184 [2024-12-05T20:01:36.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1162779 00:08:28.184 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.442 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.701 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:28.701 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:28.959 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:28.959 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:28.959 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.959 [2024-12-05 21:01:36.977989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.959 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.959 
21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.960 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.960 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:28.960 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:29.219 request: 00:08:29.219 { 00:08:29.219 "uuid": "a9704fcb-a662-40aa-beb3-9096082f6f59", 00:08:29.219 "method": "bdev_lvol_get_lvstores", 00:08:29.219 "req_id": 1 00:08:29.219 } 00:08:29.219 Got JSON-RPC error response 00:08:29.219 response: 00:08:29.219 { 00:08:29.219 "code": -19, 00:08:29.219 "message": "No such device" 00:08:29.219 } 00:08:29.219 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:29.219 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.219 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.219 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.219 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.479 aio_bdev 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev b580e54a-105c-4e1a-a925-083977b0e235 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b580e54a-105c-4e1a-a925-083977b0e235 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.479 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b580e54a-105c-4e1a-a925-083977b0e235 -t 2000 00:08:29.738 [ 00:08:29.738 { 00:08:29.738 "name": "b580e54a-105c-4e1a-a925-083977b0e235", 00:08:29.738 "aliases": [ 00:08:29.738 "lvs/lvol" 00:08:29.738 ], 00:08:29.738 "product_name": "Logical Volume", 00:08:29.738 "block_size": 4096, 00:08:29.738 "num_blocks": 38912, 00:08:29.738 "uuid": "b580e54a-105c-4e1a-a925-083977b0e235", 00:08:29.738 "assigned_rate_limits": { 00:08:29.738 "rw_ios_per_sec": 0, 00:08:29.738 "rw_mbytes_per_sec": 0, 00:08:29.738 "r_mbytes_per_sec": 0, 00:08:29.738 "w_mbytes_per_sec": 0 00:08:29.738 }, 00:08:29.738 "claimed": false, 00:08:29.738 "zoned": false, 00:08:29.738 "supported_io_types": { 00:08:29.738 "read": true, 00:08:29.738 "write": true, 00:08:29.738 "unmap": true, 00:08:29.738 "flush": false, 00:08:29.738 "reset": true, 00:08:29.738 
"nvme_admin": false, 00:08:29.738 "nvme_io": false, 00:08:29.738 "nvme_io_md": false, 00:08:29.738 "write_zeroes": true, 00:08:29.738 "zcopy": false, 00:08:29.738 "get_zone_info": false, 00:08:29.738 "zone_management": false, 00:08:29.738 "zone_append": false, 00:08:29.738 "compare": false, 00:08:29.739 "compare_and_write": false, 00:08:29.739 "abort": false, 00:08:29.739 "seek_hole": true, 00:08:29.739 "seek_data": true, 00:08:29.739 "copy": false, 00:08:29.739 "nvme_iov_md": false 00:08:29.739 }, 00:08:29.739 "driver_specific": { 00:08:29.739 "lvol": { 00:08:29.739 "lvol_store_uuid": "a9704fcb-a662-40aa-beb3-9096082f6f59", 00:08:29.739 "base_bdev": "aio_bdev", 00:08:29.739 "thin_provision": false, 00:08:29.739 "num_allocated_clusters": 38, 00:08:29.739 "snapshot": false, 00:08:29.739 "clone": false, 00:08:29.739 "esnap_clone": false 00:08:29.739 } 00:08:29.739 } 00:08:29.739 } 00:08:29.739 ] 00:08:29.739 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:29.739 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:29.739 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:29.997 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:29.997 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:29.997 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:29.997 21:01:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:29.997 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b580e54a-105c-4e1a-a925-083977b0e235 00:08:30.256 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9704fcb-a662-40aa-beb3-9096082f6f59 00:08:30.513 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.770 00:08:30.770 real 0m15.390s 00:08:30.770 user 0m14.985s 00:08:30.770 sys 0m1.458s 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:30.770 ************************************ 00:08:30.770 END TEST lvs_grow_clean 00:08:30.770 ************************************ 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.770 ************************************ 
00:08:30.770 START TEST lvs_grow_dirty 00:08:30.770 ************************************ 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.770 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.028 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.028 21:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.286 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:31.286 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:31.286 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.286 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.286 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.286 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 lvol 150 00:08:31.544 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c372d519-331c-4c5e-95d8-1d44022e55ec 00:08:31.544 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.544 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:31.802 [2024-12-05 21:01:39.722285] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:31.802 [2024-12-05 21:01:39.722334] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:31.802 true 00:08:31.802 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:31.802 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.061 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.061 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.061 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c372d519-331c-4c5e-95d8-1d44022e55ec 00:08:32.321 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:32.580 [2024-12-05 21:01:40.468500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1165388 00:08:32.580 21:01:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1165388 /var/tmp/bdevperf.sock 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1165388 ']' 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.580 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.839 [2024-12-05 21:01:40.724403] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:08:32.839 [2024-12-05 21:01:40.724450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165388 ] 00:08:32.839 [2024-12-05 21:01:40.797826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.839 [2024-12-05 21:01:40.838597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.839 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.839 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:32.839 21:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.407 Nvme0n1 00:08:33.407 21:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:33.407 [ 00:08:33.407 { 00:08:33.407 "name": "Nvme0n1", 00:08:33.407 "aliases": [ 00:08:33.407 "c372d519-331c-4c5e-95d8-1d44022e55ec" 00:08:33.407 ], 00:08:33.407 "product_name": "NVMe disk", 00:08:33.407 "block_size": 4096, 00:08:33.407 "num_blocks": 38912, 00:08:33.407 "uuid": "c372d519-331c-4c5e-95d8-1d44022e55ec", 00:08:33.407 "numa_id": 1, 00:08:33.407 "assigned_rate_limits": { 00:08:33.407 "rw_ios_per_sec": 0, 00:08:33.407 "rw_mbytes_per_sec": 0, 00:08:33.407 "r_mbytes_per_sec": 0, 00:08:33.407 "w_mbytes_per_sec": 0 00:08:33.407 }, 00:08:33.407 "claimed": false, 00:08:33.407 "zoned": false, 00:08:33.407 "supported_io_types": { 00:08:33.407 "read": true, 
00:08:33.407 "write": true, 00:08:33.407 "unmap": true, 00:08:33.407 "flush": true, 00:08:33.407 "reset": true, 00:08:33.407 "nvme_admin": true, 00:08:33.407 "nvme_io": true, 00:08:33.407 "nvme_io_md": false, 00:08:33.407 "write_zeroes": true, 00:08:33.407 "zcopy": false, 00:08:33.407 "get_zone_info": false, 00:08:33.407 "zone_management": false, 00:08:33.407 "zone_append": false, 00:08:33.407 "compare": true, 00:08:33.407 "compare_and_write": true, 00:08:33.407 "abort": true, 00:08:33.407 "seek_hole": false, 00:08:33.407 "seek_data": false, 00:08:33.407 "copy": true, 00:08:33.407 "nvme_iov_md": false 00:08:33.407 }, 00:08:33.407 "memory_domains": [ 00:08:33.407 { 00:08:33.407 "dma_device_id": "system", 00:08:33.407 "dma_device_type": 1 00:08:33.407 } 00:08:33.407 ], 00:08:33.407 "driver_specific": { 00:08:33.407 "nvme": [ 00:08:33.407 { 00:08:33.407 "trid": { 00:08:33.407 "trtype": "TCP", 00:08:33.407 "adrfam": "IPv4", 00:08:33.407 "traddr": "10.0.0.2", 00:08:33.407 "trsvcid": "4420", 00:08:33.407 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:33.407 }, 00:08:33.407 "ctrlr_data": { 00:08:33.407 "cntlid": 1, 00:08:33.407 "vendor_id": "0x8086", 00:08:33.407 "model_number": "SPDK bdev Controller", 00:08:33.407 "serial_number": "SPDK0", 00:08:33.407 "firmware_revision": "25.01", 00:08:33.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.407 "oacs": { 00:08:33.407 "security": 0, 00:08:33.407 "format": 0, 00:08:33.407 "firmware": 0, 00:08:33.407 "ns_manage": 0 00:08:33.407 }, 00:08:33.408 "multi_ctrlr": true, 00:08:33.408 "ana_reporting": false 00:08:33.408 }, 00:08:33.408 "vs": { 00:08:33.408 "nvme_version": "1.3" 00:08:33.408 }, 00:08:33.408 "ns_data": { 00:08:33.408 "id": 1, 00:08:33.408 "can_share": true 00:08:33.408 } 00:08:33.408 } 00:08:33.408 ], 00:08:33.408 "mp_policy": "active_passive" 00:08:33.408 } 00:08:33.408 } 00:08:33.408 ] 00:08:33.408 21:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1165528 00:08:33.408 21:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.408 21:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:33.408 Running I/O for 10 seconds... 00:08:34.785 Latency(us) 00:08:34.785 [2024-12-05T20:01:42.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.785 Nvme0n1 : 1.00 22502.00 87.90 0.00 0.00 0.00 0.00 0.00 00:08:34.785 [2024-12-05T20:01:42.893Z] =================================================================================================================== 00:08:34.785 [2024-12-05T20:01:42.893Z] Total : 22502.00 87.90 0.00 0.00 0.00 0.00 0.00 00:08:34.785 00:08:35.352 21:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:35.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.610 Nvme0n1 : 2.00 22651.00 88.48 0.00 0.00 0.00 0.00 0.00 00:08:35.610 [2024-12-05T20:01:43.718Z] =================================================================================================================== 00:08:35.610 [2024-12-05T20:01:43.718Z] Total : 22651.00 88.48 0.00 0.00 0.00 0.00 0.00 00:08:35.610 00:08:35.610 true 00:08:35.610 21:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:35.610 21:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:35.869 21:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.869 21:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.869 21:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1165528 00:08:36.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.436 Nvme0n1 : 3.00 22692.67 88.64 0.00 0.00 0.00 0.00 0.00 00:08:36.436 [2024-12-05T20:01:44.544Z] =================================================================================================================== 00:08:36.436 [2024-12-05T20:01:44.544Z] Total : 22692.67 88.64 0.00 0.00 0.00 0.00 0.00 00:08:36.436 00:08:37.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.812 Nvme0n1 : 4.00 22761.50 88.91 0.00 0.00 0.00 0.00 0.00 00:08:37.812 [2024-12-05T20:01:45.920Z] =================================================================================================================== 00:08:37.812 [2024-12-05T20:01:45.920Z] Total : 22761.50 88.91 0.00 0.00 0.00 0.00 0.00 00:08:37.812 00:08:38.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.747 Nvme0n1 : 5.00 22809.20 89.10 0.00 0.00 0.00 0.00 0.00 00:08:38.747 [2024-12-05T20:01:46.855Z] =================================================================================================================== 00:08:38.747 [2024-12-05T20:01:46.855Z] Total : 22809.20 89.10 0.00 0.00 0.00 0.00 0.00 00:08:38.747 00:08:39.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.682 Nvme0n1 : 6.00 22849.00 89.25 0.00 0.00 0.00 0.00 0.00 00:08:39.682 [2024-12-05T20:01:47.790Z] =================================================================================================================== 00:08:39.682 
[2024-12-05T20:01:47.790Z] Total : 22849.00 89.25 0.00 0.00 0.00 0.00 0.00 00:08:39.682 00:08:40.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.617 Nvme0n1 : 7.00 22878.57 89.37 0.00 0.00 0.00 0.00 0.00 00:08:40.617 [2024-12-05T20:01:48.725Z] =================================================================================================================== 00:08:40.617 [2024-12-05T20:01:48.725Z] Total : 22878.57 89.37 0.00 0.00 0.00 0.00 0.00 00:08:40.617 00:08:41.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.552 Nvme0n1 : 8.00 22904.75 89.47 0.00 0.00 0.00 0.00 0.00 00:08:41.552 [2024-12-05T20:01:49.660Z] =================================================================================================================== 00:08:41.552 [2024-12-05T20:01:49.660Z] Total : 22904.75 89.47 0.00 0.00 0.00 0.00 0.00 00:08:41.552 00:08:42.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.487 Nvme0n1 : 9.00 22908.22 89.49 0.00 0.00 0.00 0.00 0.00 00:08:42.487 [2024-12-05T20:01:50.595Z] =================================================================================================================== 00:08:42.487 [2024-12-05T20:01:50.595Z] Total : 22908.22 89.49 0.00 0.00 0.00 0.00 0.00 00:08:42.487 00:08:43.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.862 Nvme0n1 : 10.00 22928.60 89.56 0.00 0.00 0.00 0.00 0.00 00:08:43.862 [2024-12-05T20:01:51.970Z] =================================================================================================================== 00:08:43.862 [2024-12-05T20:01:51.970Z] Total : 22928.60 89.56 0.00 0.00 0.00 0.00 0.00 00:08:43.862 00:08:43.862 00:08:43.862 Latency(us) 00:08:43.862 [2024-12-05T20:01:51.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:43.862 Nvme0n1 : 10.01 22928.85 89.57 0.00 0.00 5578.45 4337.86 14917.24 00:08:43.862 [2024-12-05T20:01:51.970Z] =================================================================================================================== 00:08:43.862 [2024-12-05T20:01:51.970Z] Total : 22928.85 89.57 0.00 0.00 5578.45 4337.86 14917.24 00:08:43.862 { 00:08:43.862 "results": [ 00:08:43.862 { 00:08:43.862 "job": "Nvme0n1", 00:08:43.862 "core_mask": "0x2", 00:08:43.862 "workload": "randwrite", 00:08:43.862 "status": "finished", 00:08:43.862 "queue_depth": 128, 00:08:43.862 "io_size": 4096, 00:08:43.862 "runtime": 10.005472, 00:08:43.862 "iops": 22928.853331457027, 00:08:43.862 "mibps": 89.56583332600401, 00:08:43.862 "io_failed": 0, 00:08:43.862 "io_timeout": 0, 00:08:43.862 "avg_latency_us": 5578.450121439842, 00:08:43.862 "min_latency_us": 4337.8590476190475, 00:08:43.862 "max_latency_us": 14917.241904761904 00:08:43.862 } 00:08:43.862 ], 00:08:43.862 "core_count": 1 00:08:43.862 } 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1165388 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1165388 ']' 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1165388 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1165388 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.862 21:01:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1165388' 00:08:43.862 killing process with pid 1165388 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1165388 00:08:43.862 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.862 00:08:43.862 Latency(us) 00:08:43.862 [2024-12-05T20:01:51.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.862 [2024-12-05T20:01:51.970Z] =================================================================================================================== 00:08:43.862 [2024-12-05T20:01:51.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1165388 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.862 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.122 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:44.122 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1162280 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1162280 00:08:44.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1162280 Killed "${NVMF_APP[@]}" "$@" 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1167324 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1167324 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1167324 ']' 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.381 21:01:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.381 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 [2024-12-05 21:01:52.448069] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:08:44.381 [2024-12-05 21:01:52.448116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.677 [2024-12-05 21:01:52.525306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.677 [2024-12-05 21:01:52.565856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.677 [2024-12-05 21:01:52.565894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.677 [2024-12-05 21:01:52.565902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.677 [2024-12-05 21:01:52.565907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.677 [2024-12-05 21:01:52.565912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:44.677 [2024-12-05 21:01:52.566470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.677 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.934 [2024-12-05 21:01:52.861513] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:44.934 [2024-12-05 21:01:52.861609] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:44.934 [2024-12-05 21:01:52.861634] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:44.934 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:44.934 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c372d519-331c-4c5e-95d8-1d44022e55ec 00:08:44.935 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c372d519-331c-4c5e-95d8-1d44022e55ec 
00:08:44.935 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.935 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:44.935 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.935 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.935 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.193 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c372d519-331c-4c5e-95d8-1d44022e55ec -t 2000 00:08:45.193 [ 00:08:45.193 { 00:08:45.193 "name": "c372d519-331c-4c5e-95d8-1d44022e55ec", 00:08:45.193 "aliases": [ 00:08:45.193 "lvs/lvol" 00:08:45.193 ], 00:08:45.193 "product_name": "Logical Volume", 00:08:45.193 "block_size": 4096, 00:08:45.193 "num_blocks": 38912, 00:08:45.193 "uuid": "c372d519-331c-4c5e-95d8-1d44022e55ec", 00:08:45.193 "assigned_rate_limits": { 00:08:45.193 "rw_ios_per_sec": 0, 00:08:45.193 "rw_mbytes_per_sec": 0, 00:08:45.193 "r_mbytes_per_sec": 0, 00:08:45.193 "w_mbytes_per_sec": 0 00:08:45.193 }, 00:08:45.193 "claimed": false, 00:08:45.193 "zoned": false, 00:08:45.193 "supported_io_types": { 00:08:45.193 "read": true, 00:08:45.193 "write": true, 00:08:45.193 "unmap": true, 00:08:45.193 "flush": false, 00:08:45.193 "reset": true, 00:08:45.193 "nvme_admin": false, 00:08:45.193 "nvme_io": false, 00:08:45.193 "nvme_io_md": false, 00:08:45.193 "write_zeroes": true, 00:08:45.193 "zcopy": false, 00:08:45.193 "get_zone_info": false, 00:08:45.193 "zone_management": false, 00:08:45.193 "zone_append": 
false, 00:08:45.193 "compare": false, 00:08:45.193 "compare_and_write": false, 00:08:45.193 "abort": false, 00:08:45.193 "seek_hole": true, 00:08:45.193 "seek_data": true, 00:08:45.193 "copy": false, 00:08:45.193 "nvme_iov_md": false 00:08:45.193 }, 00:08:45.193 "driver_specific": { 00:08:45.193 "lvol": { 00:08:45.193 "lvol_store_uuid": "548808f3-c8b7-4ba6-8c2d-49873be976e0", 00:08:45.193 "base_bdev": "aio_bdev", 00:08:45.193 "thin_provision": false, 00:08:45.193 "num_allocated_clusters": 38, 00:08:45.193 "snapshot": false, 00:08:45.193 "clone": false, 00:08:45.193 "esnap_clone": false 00:08:45.193 } 00:08:45.193 } 00:08:45.193 } 00:08:45.193 ] 00:08:45.193 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:45.193 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:45.193 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:45.451 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:45.451 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:45.451 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:45.710 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:45.710 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:45.710 [2024-12-05 21:01:53.798319] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.968 21:01:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:45.968 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:45.968 request: 00:08:45.968 { 00:08:45.968 "uuid": "548808f3-c8b7-4ba6-8c2d-49873be976e0", 00:08:45.969 "method": "bdev_lvol_get_lvstores", 00:08:45.969 "req_id": 1 00:08:45.969 } 00:08:45.969 Got JSON-RPC error response 00:08:45.969 response: 00:08:45.969 { 00:08:45.969 "code": -19, 00:08:45.969 "message": "No such device" 00:08:45.969 } 00:08:45.969 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:45.969 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.969 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:45.969 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.969 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.227 aio_bdev 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c372d519-331c-4c5e-95d8-1d44022e55ec 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c372d519-331c-4c5e-95d8-1d44022e55ec 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.227 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.485 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c372d519-331c-4c5e-95d8-1d44022e55ec -t 2000 00:08:46.485 [ 00:08:46.485 { 00:08:46.485 "name": "c372d519-331c-4c5e-95d8-1d44022e55ec", 00:08:46.485 "aliases": [ 00:08:46.485 "lvs/lvol" 00:08:46.485 ], 00:08:46.485 "product_name": "Logical Volume", 00:08:46.485 "block_size": 4096, 00:08:46.485 "num_blocks": 38912, 00:08:46.485 "uuid": "c372d519-331c-4c5e-95d8-1d44022e55ec", 00:08:46.485 "assigned_rate_limits": { 00:08:46.485 "rw_ios_per_sec": 0, 00:08:46.485 "rw_mbytes_per_sec": 0, 00:08:46.485 "r_mbytes_per_sec": 0, 00:08:46.485 "w_mbytes_per_sec": 0 00:08:46.485 }, 00:08:46.485 "claimed": false, 00:08:46.485 "zoned": false, 00:08:46.485 "supported_io_types": { 00:08:46.485 "read": true, 00:08:46.485 "write": true, 00:08:46.485 "unmap": true, 00:08:46.485 "flush": false, 00:08:46.485 "reset": true, 00:08:46.485 "nvme_admin": false, 00:08:46.485 "nvme_io": false, 00:08:46.485 "nvme_io_md": false, 00:08:46.485 "write_zeroes": true, 00:08:46.485 "zcopy": false, 00:08:46.485 "get_zone_info": false, 00:08:46.485 "zone_management": false, 00:08:46.485 "zone_append": false, 00:08:46.485 "compare": false, 00:08:46.485 "compare_and_write": false, 
00:08:46.485 "abort": false, 00:08:46.485 "seek_hole": true, 00:08:46.485 "seek_data": true, 00:08:46.485 "copy": false, 00:08:46.485 "nvme_iov_md": false 00:08:46.485 }, 00:08:46.485 "driver_specific": { 00:08:46.485 "lvol": { 00:08:46.485 "lvol_store_uuid": "548808f3-c8b7-4ba6-8c2d-49873be976e0", 00:08:46.485 "base_bdev": "aio_bdev", 00:08:46.485 "thin_provision": false, 00:08:46.485 "num_allocated_clusters": 38, 00:08:46.485 "snapshot": false, 00:08:46.485 "clone": false, 00:08:46.485 "esnap_clone": false 00:08:46.485 } 00:08:46.485 } 00:08:46.485 } 00:08:46.485 ] 00:08:46.485 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:46.485 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:46.485 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:46.744 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.744 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:46.744 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.002 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.002 21:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c372d519-331c-4c5e-95d8-1d44022e55ec 00:08:47.260 21:01:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 548808f3-c8b7-4ba6-8c2d-49873be976e0 00:08:47.260 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.518 00:08:47.518 real 0m16.800s 00:08:47.518 user 0m43.086s 00:08:47.518 sys 0m4.033s 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:47.518 ************************************ 00:08:47.518 END TEST lvs_grow_dirty 00:08:47.518 ************************************ 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:47.518 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:47.518 nvmf_trace.0 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.776 rmmod nvme_tcp 00:08:47.776 rmmod nvme_fabrics 00:08:47.776 rmmod nvme_keyring 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1167324 ']' 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1167324 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1167324 ']' 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1167324 
00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1167324 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1167324' 00:08:47.776 killing process with pid 1167324 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1167324 00:08:47.776 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1167324 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.035 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.940 21:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.940 00:08:49.940 real 0m41.524s 00:08:49.940 user 1m3.713s 00:08:49.941 sys 0m10.427s 00:08:49.941 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.941 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.941 ************************************ 00:08:49.941 END TEST nvmf_lvs_grow 00:08:49.941 ************************************ 00:08:49.941 21:01:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:49.941 21:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.941 21:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.941 21:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.200 ************************************ 00:08:50.200 START TEST nvmf_bdev_io_wait 00:08:50.200 ************************************ 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.200 * Looking for test storage... 
00:08:50.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.200 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.201 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.201 --rc genhtml_branch_coverage=1 00:08:50.201 --rc genhtml_function_coverage=1 00:08:50.201 --rc genhtml_legend=1 00:08:50.201 --rc geninfo_all_blocks=1 00:08:50.201 --rc geninfo_unexecuted_blocks=1 00:08:50.201 00:08:50.201 ' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.201 --rc genhtml_branch_coverage=1 00:08:50.201 --rc genhtml_function_coverage=1 00:08:50.201 --rc genhtml_legend=1 00:08:50.201 --rc geninfo_all_blocks=1 00:08:50.201 --rc geninfo_unexecuted_blocks=1 00:08:50.201 00:08:50.201 ' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.201 --rc genhtml_branch_coverage=1 00:08:50.201 --rc genhtml_function_coverage=1 00:08:50.201 --rc genhtml_legend=1 00:08:50.201 --rc geninfo_all_blocks=1 00:08:50.201 --rc geninfo_unexecuted_blocks=1 00:08:50.201 00:08:50.201 ' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.201 --rc genhtml_branch_coverage=1 00:08:50.201 --rc genhtml_function_coverage=1 00:08:50.201 --rc genhtml_legend=1 00:08:50.201 --rc geninfo_all_blocks=1 00:08:50.201 --rc geninfo_unexecuted_blocks=1 00:08:50.201 00:08:50.201 ' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.201 21:01:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.201 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:50.202 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.202 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.202 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.202 21:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:56.774 21:02:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:56.774 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:56.774 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.774 21:02:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:56.774 Found net devices under 0000:86:00.0: cvl_0_0 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.774 
21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:56.774 Found net devices under 0000:86:00.1: cvl_0_1 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.774 21:02:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.774 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.774 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.774 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.774 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.774 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.774 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.774 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:08:56.775 00:08:56.775 --- 10.0.0.2 ping statistics --- 00:08:56.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.775 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:08:56.775 00:08:56.775 --- 10.0.0.1 ping statistics --- 00:08:56.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.775 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1171527 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1171527 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1171527 ']' 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.775 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.775 [2024-12-05 21:02:04.336529] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:08:56.775 [2024-12-05 21:02:04.336570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.775 [2024-12-05 21:02:04.414498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.775 [2024-12-05 21:02:04.460118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.775 [2024-12-05 21:02:04.460151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:56.775 [2024-12-05 21:02:04.460158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.775 [2024-12-05 21:02:04.460164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.775 [2024-12-05 21:02:04.460169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.775 [2024-12-05 21:02:04.461720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.775 [2024-12-05 21:02:04.461757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.775 [2024-12-05 21:02:04.461782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.775 [2024-12-05 21:02:04.461783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 21:02:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 [2024-12-05 21:02:05.292145] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 Malloc0 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 
21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.341 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.342 [2024-12-05 21:02:05.347470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1171776 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1171778 
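Stripped of the xtrace prefixes, the target-side setup traced here (bdev_io_wait.sh lines 18–25) is a plain `rpc.py` sequence. A sketch under the assumption that SPDK's `rpc.py` is on PATH and `nvmf_tgt` was started with `--wait-for-rpc`, as in this run:

```shell
rpc.py bdev_set_options -p 5 -c 1               # tiny bdev_io pool/cache, to provoke IO_WAIT
rpc.py framework_start_init                     # finish init deferred by --wait-for-rpc
rpc.py nvmf_create_transport -t tcp -o -u 8192  # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

These commands require a running target and so are shown here only as the de-noised form of the trace above; the arguments are taken verbatim from the traced `rpc_cmd` calls.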
00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.342 { 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme$subsystem", 00:08:57.342 "trtype": "$TEST_TRANSPORT", 00:08:57.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "$NVMF_PORT", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.342 "hdgst": ${hdgst:-false}, 00:08:57.342 "ddgst": ${ddgst:-false} 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 } 00:08:57.342 EOF 00:08:57.342 )") 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1171780 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.342 { 00:08:57.342 "params": { 00:08:57.342 
"name": "Nvme$subsystem", 00:08:57.342 "trtype": "$TEST_TRANSPORT", 00:08:57.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "$NVMF_PORT", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.342 "hdgst": ${hdgst:-false}, 00:08:57.342 "ddgst": ${ddgst:-false} 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 } 00:08:57.342 EOF 00:08:57.342 )") 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1171783 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:08:57.342 { 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme$subsystem", 00:08:57.342 "trtype": "$TEST_TRANSPORT", 00:08:57.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "$NVMF_PORT", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.342 "hdgst": ${hdgst:-false}, 00:08:57.342 "ddgst": ${ddgst:-false} 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 } 00:08:57.342 EOF 00:08:57.342 )") 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.342 { 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme$subsystem", 00:08:57.342 "trtype": "$TEST_TRANSPORT", 00:08:57.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "$NVMF_PORT", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.342 "hdgst": ${hdgst:-false}, 00:08:57.342 "ddgst": ${ddgst:-false} 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 } 00:08:57.342 EOF 00:08:57.342 )") 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1171776 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.342 
21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme1", 00:08:57.342 "trtype": "tcp", 00:08:57.342 "traddr": "10.0.0.2", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "4420", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.342 "hdgst": false, 00:08:57.342 "ddgst": false 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 }' 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme1", 00:08:57.342 "trtype": "tcp", 00:08:57.342 "traddr": "10.0.0.2", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "4420", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.342 "hdgst": false, 00:08:57.342 "ddgst": false 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 }' 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme1", 00:08:57.342 "trtype": "tcp", 00:08:57.342 "traddr": "10.0.0.2", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "4420", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.342 "hdgst": false, 00:08:57.342 "ddgst": false 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 }' 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.342 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme1", 00:08:57.342 "trtype": "tcp", 00:08:57.342 "traddr": "10.0.0.2", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "4420", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.342 "hdgst": false, 00:08:57.342 "ddgst": false 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 }' 00:08:57.342 [2024-12-05 21:02:05.399998] Starting SPDK v25.01-pre git sha1 
2b8672176 / DPDK 24.03.0 initialization... 00:08:57.342 [2024-12-05 21:02:05.400000] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:08:57.342 [2024-12-05 21:02:05.400047] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:57.342 [2024-12-05 21:02:05.400048] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:57.342 [2024-12-05 21:02:05.403756] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:08:57.342 [2024-12-05 21:02:05.403802] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:57.342 [2024-12-05 21:02:05.409664] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:08:57.343 [2024-12-05 21:02:05.409737] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:57.601 [2024-12-05 21:02:05.588386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.601 [2024-12-05 21:02:05.628915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:57.601 [2024-12-05 21:02:05.680959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.859 [2024-12-05 21:02:05.723416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:57.859 [2024-12-05 21:02:05.782454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.859 [2024-12-05 21:02:05.836030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:57.859 [2024-12-05 21:02:05.842280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.859 [2024-12-05 21:02:05.884764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:57.859 Running I/O for 1 seconds... 00:08:57.859 Running I/O for 1 seconds... 00:08:58.117 Running I/O for 1 seconds... 00:08:58.117 Running I/O for 1 seconds... 
00:08:59.054 11951.00 IOPS, 46.68 MiB/s 00:08:59.054 Latency(us) 00:08:59.054 [2024-12-05T20:02:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.054 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:59.054 Nvme1n1 : 1.01 11996.09 46.86 0.00 0.00 10632.12 6116.69 15603.81 00:08:59.054 [2024-12-05T20:02:07.162Z] =================================================================================================================== 00:08:59.054 [2024-12-05T20:02:07.162Z] Total : 11996.09 46.86 0.00 0.00 10632.12 6116.69 15603.81 00:08:59.054 11343.00 IOPS, 44.31 MiB/s 00:08:59.054 Latency(us) 00:08:59.054 [2024-12-05T20:02:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.054 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:59.054 Nvme1n1 : 1.01 11410.29 44.57 0.00 0.00 11181.32 4493.90 19972.88 00:08:59.054 [2024-12-05T20:02:07.162Z] =================================================================================================================== 00:08:59.054 [2024-12-05T20:02:07.162Z] Total : 11410.29 44.57 0.00 0.00 11181.32 4493.90 19972.88 00:08:59.054 10335.00 IOPS, 40.37 MiB/s 00:08:59.054 Latency(us) 00:08:59.054 [2024-12-05T20:02:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.054 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:59.054 Nvme1n1 : 1.01 10419.40 40.70 0.00 0.00 12252.55 3526.46 22843.98 00:08:59.054 [2024-12-05T20:02:07.162Z] =================================================================================================================== 00:08:59.054 [2024-12-05T20:02:07.162Z] Total : 10419.40 40.70 0.00 0.00 12252.55 3526.46 22843.98 00:08:59.054 242552.00 IOPS, 947.47 MiB/s 00:08:59.054 Latency(us) 00:08:59.054 [2024-12-05T20:02:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.054 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:59.054 Nvme1n1 : 1.00 242176.63 946.00 0.00 0.00 526.41 222.35 1521.37 00:08:59.054 [2024-12-05T20:02:07.162Z] =================================================================================================================== 00:08:59.054 [2024-12-05T20:02:07.162Z] Total : 242176.63 946.00 0.00 0.00 526.41 222.35 1521.37 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1171778 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1171780 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1171783 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
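The MiB/s column in the bdevperf tables above is just IOPS scaled by the 4096-byte IO size (MiB/s = IOPS × io_size / 2^20). Checking the read job's line as an example:

```shell
# MiB/s = IOPS * io_size_bytes / 2^20; the read job reported 11996.09 IOPS at 4 KiB
iops=11996.09
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f\n", iops * sz / 1048576 }'   # prints 46.86, matching the table
```

The same arithmetic reproduces the flush job's figure (242176.63 IOPS × 4096 B ≈ 946.00 MiB/s).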
00:08:59.054 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.312 rmmod nvme_tcp 00:08:59.312 rmmod nvme_fabrics 00:08:59.312 rmmod nvme_keyring 00:08:59.312 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.312 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:59.312 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:59.312 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1171527 ']' 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1171527 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1171527 ']' 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1171527 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1171527 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1171527' 00:08:59.313 killing process with pid 1171527 00:08:59.313 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1171527 00:08:59.313 21:02:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1171527 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.572 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.475 00:09:01.475 real 0m11.427s 00:09:01.475 user 0m18.648s 00:09:01.475 sys 0m6.208s 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.475 ************************************ 
00:09:01.475 END TEST nvmf_bdev_io_wait 00:09:01.475 ************************************ 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.475 21:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.475 ************************************ 00:09:01.475 START TEST nvmf_queue_depth 00:09:01.475 ************************************ 00:09:01.476 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.733 * Looking for test storage... 00:09:01.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.733 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:01.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.734 --rc genhtml_branch_coverage=1 00:09:01.734 --rc genhtml_function_coverage=1 00:09:01.734 --rc genhtml_legend=1 00:09:01.734 --rc geninfo_all_blocks=1 00:09:01.734 --rc 
geninfo_unexecuted_blocks=1 00:09:01.734 00:09:01.734 ' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:01.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.734 --rc genhtml_branch_coverage=1 00:09:01.734 --rc genhtml_function_coverage=1 00:09:01.734 --rc genhtml_legend=1 00:09:01.734 --rc geninfo_all_blocks=1 00:09:01.734 --rc geninfo_unexecuted_blocks=1 00:09:01.734 00:09:01.734 ' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:01.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.734 --rc genhtml_branch_coverage=1 00:09:01.734 --rc genhtml_function_coverage=1 00:09:01.734 --rc genhtml_legend=1 00:09:01.734 --rc geninfo_all_blocks=1 00:09:01.734 --rc geninfo_unexecuted_blocks=1 00:09:01.734 00:09:01.734 ' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:01.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.734 --rc genhtml_branch_coverage=1 00:09:01.734 --rc genhtml_function_coverage=1 00:09:01.734 --rc genhtml_legend=1 00:09:01.734 --rc geninfo_all_blocks=1 00:09:01.734 --rc geninfo_unexecuted_blocks=1 00:09:01.734 00:09:01.734 ' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.734 21:02:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.734 21:02:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:01.734 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.735 21:02:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.735 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.474 21:02:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:08.474 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.475 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.475 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.475 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.475 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.475 
21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:09:08.475 00:09:08.475 --- 10.0.0.2 ping statistics --- 00:09:08.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.475 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:08.475 00:09:08.475 --- 10.0.0.1 ping statistics --- 00:09:08.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.475 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.475 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1175576 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1175576 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1175576 ']' 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.476 21:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 [2024-12-05 21:02:15.858024] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:09:08.476 [2024-12-05 21:02:15.858075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.476 [2024-12-05 21:02:15.940680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.476 [2024-12-05 21:02:15.979468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.476 [2024-12-05 21:02:15.979506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.476 [2024-12-05 21:02:15.979513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.476 [2024-12-05 21:02:15.979518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.476 [2024-12-05 21:02:15.979523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.476 [2024-12-05 21:02:15.980073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 [2024-12-05 21:02:16.128180] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 Malloc0 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 [2024-12-05 21:02:16.178604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.476 21:02:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1175673 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1175673 /var/tmp/bdevperf.sock 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1175673 ']' 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 [2024-12-05 21:02:16.231274] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:09:08.476 [2024-12-05 21:02:16.231314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175673 ] 00:09:08.476 [2024-12-05 21:02:16.307072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.476 [2024-12-05 21:02:16.350874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 NVMe0n1 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.476 21:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:08.735 Running I/O for 10 seconds... 
00:09:10.605 11912.00 IOPS, 46.53 MiB/s [2024-12-05T20:02:20.089Z] 12250.00 IOPS, 47.85 MiB/s [2024-12-05T20:02:21.025Z] 12273.67 IOPS, 47.94 MiB/s [2024-12-05T20:02:21.962Z] 12283.75 IOPS, 47.98 MiB/s [2024-12-05T20:02:22.899Z] 12355.40 IOPS, 48.26 MiB/s [2024-12-05T20:02:23.836Z] 12422.33 IOPS, 48.52 MiB/s [2024-12-05T20:02:24.773Z] 12413.29 IOPS, 48.49 MiB/s [2024-12-05T20:02:25.710Z] 12432.88 IOPS, 48.57 MiB/s [2024-12-05T20:02:27.086Z] 12487.67 IOPS, 48.78 MiB/s [2024-12-05T20:02:27.086Z] 12497.60 IOPS, 48.82 MiB/s 00:09:18.978 Latency(us) 00:09:18.978 [2024-12-05T20:02:27.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.978 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:18.978 Verification LBA range: start 0x0 length 0x4000 00:09:18.978 NVMe0n1 : 10.05 12530.50 48.95 0.00 0.00 81438.02 11359.57 52678.46 00:09:18.978 [2024-12-05T20:02:27.086Z] =================================================================================================================== 00:09:18.978 [2024-12-05T20:02:27.086Z] Total : 12530.50 48.95 0.00 0.00 81438.02 11359.57 52678.46 00:09:18.978 { 00:09:18.978 "results": [ 00:09:18.978 { 00:09:18.978 "job": "NVMe0n1", 00:09:18.978 "core_mask": "0x1", 00:09:18.978 "workload": "verify", 00:09:18.978 "status": "finished", 00:09:18.978 "verify_range": { 00:09:18.978 "start": 0, 00:09:18.978 "length": 16384 00:09:18.978 }, 00:09:18.978 "queue_depth": 1024, 00:09:18.978 "io_size": 4096, 00:09:18.978 "runtime": 10.050519, 00:09:18.978 "iops": 12530.497181289842, 00:09:18.978 "mibps": 48.947254614413445, 00:09:18.978 "io_failed": 0, 00:09:18.978 "io_timeout": 0, 00:09:18.978 "avg_latency_us": 81438.02320555315, 00:09:18.978 "min_latency_us": 11359.573333333334, 00:09:18.978 "max_latency_us": 52678.460952380956 00:09:18.978 } 00:09:18.978 ], 00:09:18.978 "core_count": 1 00:09:18.978 } 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 1175673 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1175673 ']' 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1175673 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1175673 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1175673' 00:09:18.978 killing process with pid 1175673 00:09:18.978 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1175673 00:09:18.978 Received shutdown signal, test time was about 10.000000 seconds 00:09:18.978 00:09:18.978 Latency(us) 00:09:18.978 [2024-12-05T20:02:27.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.979 [2024-12-05T20:02:27.087Z] =================================================================================================================== 00:09:18.979 [2024-12-05T20:02:27.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1175673 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.979 21:02:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.979 rmmod nvme_tcp 00:09:18.979 rmmod nvme_fabrics 00:09:18.979 rmmod nvme_keyring 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1175576 ']' 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1175576 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1175576 ']' 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1175576 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.979 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1175576 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1175576' 00:09:19.237 killing process with pid 1175576 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1175576 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1175576 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.237 21:02:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.773 21:02:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.773 00:09:21.773 real 0m19.787s 00:09:21.773 user 0m23.103s 00:09:21.773 sys 0m6.059s 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.773 ************************************ 00:09:21.773 END TEST nvmf_queue_depth 00:09:21.773 ************************************ 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.773 ************************************ 00:09:21.773 START TEST nvmf_target_multipath 00:09:21.773 ************************************ 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.773 * Looking for test storage... 
00:09:21.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.773 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:21.774 21:02:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.774 --rc genhtml_branch_coverage=1 00:09:21.774 --rc genhtml_function_coverage=1 00:09:21.774 --rc genhtml_legend=1 00:09:21.774 --rc geninfo_all_blocks=1 00:09:21.774 --rc geninfo_unexecuted_blocks=1 00:09:21.774 00:09:21.774 ' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.774 --rc genhtml_branch_coverage=1 00:09:21.774 --rc genhtml_function_coverage=1 00:09:21.774 --rc genhtml_legend=1 00:09:21.774 --rc geninfo_all_blocks=1 00:09:21.774 --rc geninfo_unexecuted_blocks=1 00:09:21.774 00:09:21.774 ' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.774 --rc genhtml_branch_coverage=1 00:09:21.774 --rc genhtml_function_coverage=1 00:09:21.774 --rc genhtml_legend=1 00:09:21.774 --rc geninfo_all_blocks=1 00:09:21.774 --rc geninfo_unexecuted_blocks=1 00:09:21.774 00:09:21.774 ' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.774 --rc genhtml_branch_coverage=1 00:09:21.774 --rc genhtml_function_coverage=1 00:09:21.774 --rc genhtml_legend=1 00:09:21.774 --rc geninfo_all_blocks=1 00:09:21.774 --rc geninfo_unexecuted_blocks=1 00:09:21.774 00:09:21.774 ' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.774 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.775 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.775 21:02:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.367 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:28.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:28.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:28.368 Found net devices under 0000:86:00.0: cvl_0_0 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.368 21:02:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:28.368 Found net devices under 0000:86:00.1: cvl_0_1 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
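The nvmf_tcp_init sequence traced next moves the target NIC into a private network namespace so initiator and target traffic actually cross the wire between the two ports. A condensed sketch of that sequence, assuming the same 10.0.0.0/24 addressing and port 4420 as the log; `setup_nvmf_tcp_ns` is a hypothetical wrapper (the real logic lives inline in nvmf/common.sh), requires root and real interfaces, and is therefore shown but not executed here:

```shell
#!/usr/bin/env bash
# setup_nvmf_tcp_ns TARGET_IF INITIATOR_IF [NS]
# Hypothetical wrapper mirroring the nvmf_tcp_init steps in the trace:
# isolate the target NIC in a netns, address both sides, open port 4420.
# Requires root and two physical interfaces; illustration only.
setup_nvmf_tcp_ns() {
  local target_if=$1 initiator_if=$2 ns=${3:-nvmf_tgt_ns}
  ip -4 addr flush "$target_if"
  ip -4 addr flush "$initiator_if"
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"          # target side lives in the ns
  ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator keeps the host side
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so teardown can filter it
  # back out of iptables-save output (the SPDK_NVMF comment trick below).
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
  # Sanity check both directions, as the log does.
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```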
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:28.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:28.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms
00:09:28.368
00:09:28.368 --- 10.0.0.2 ping statistics ---
00:09:28.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:28.368 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:28.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:28.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms
00:09:28.368
00:09:28.368 --- 10.0.0.1 ping statistics ---
00:09:28.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:28.368 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:09:28.368 only one NIC for nvmf test
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:28.368 21:02:35
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.368 rmmod nvme_tcp 00:09:28.368 rmmod nvme_fabrics 00:09:28.368 rmmod nvme_keyring 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.368 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.369 21:02:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']'
00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:29.746 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:29.747
00:09:29.747 real 0m8.397s
00:09:29.747 user 0m1.778s
00:09:29.747 sys 0m4.610s
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.747 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:29.747 ************************************
00:09:29.747 END TEST nvmf_target_multipath
00:09:29.747 ************************************
00:09:30.006 21:02:37 nvmf_tcp.nvmf_target_core
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:30.006 21:02:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.006 21:02:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.006 21:02:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.007 ************************************ 00:09:30.007 START TEST nvmf_zcopy 00:09:30.007 ************************************ 00:09:30.007 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:30.007 * Looking for test storage... 00:09:30.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.007 21:02:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
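The scripts/common.sh trace running here is a bash version comparison: each version string is split into components on `.`, `-` and `:` (the `IFS=.-:` / `read -ra` lines), and the component arrays are compared element-wise. A compact stand-in for that logic — the name `version_lt` is invented for this sketch (SPDK's actual helpers are `cmp_versions`/`lt`), and it assumes purely numeric components as in the trace:

```shell
#!/usr/bin/env bash
# version_lt A B -- succeed (return 0) when version A sorts strictly before B.
# Hypothetical stand-in for the cmp_versions helper traced in this log;
# splits on the same ".-:" separator set and compares components numerically.
version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal versions are not "less than"
}
```

With the values from the trace, `version_lt 1.15 2` succeeds, which is why the lcov branch-coverage options get enabled.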
00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.007 21:02:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.007 --rc genhtml_branch_coverage=1 00:09:30.007 --rc genhtml_function_coverage=1 00:09:30.007 --rc genhtml_legend=1 00:09:30.007 --rc geninfo_all_blocks=1 00:09:30.007 --rc geninfo_unexecuted_blocks=1 00:09:30.007 00:09:30.007 ' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.007 --rc genhtml_branch_coverage=1 00:09:30.007 --rc genhtml_function_coverage=1 00:09:30.007 --rc genhtml_legend=1 00:09:30.007 --rc geninfo_all_blocks=1 00:09:30.007 --rc geninfo_unexecuted_blocks=1 00:09:30.007 00:09:30.007 ' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.007 --rc genhtml_branch_coverage=1 00:09:30.007 --rc genhtml_function_coverage=1 00:09:30.007 --rc genhtml_legend=1 00:09:30.007 --rc geninfo_all_blocks=1 00:09:30.007 --rc geninfo_unexecuted_blocks=1 00:09:30.007 00:09:30.007 ' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.007 --rc genhtml_branch_coverage=1 00:09:30.007 --rc 
genhtml_function_coverage=1 00:09:30.007 --rc genhtml_legend=1 00:09:30.007 --rc geninfo_all_blocks=1 00:09:30.007 --rc geninfo_unexecuted_blocks=1 00:09:30.007 00:09:30.007 ' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.007 21:02:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.007 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.267 21:02:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.267 21:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.840 21:02:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:36.840 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:36.840 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:36.840 Found net devices under 0000:86:00.0: cvl_0_0 00:09:36.840 21:02:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:36.840 Found net devices under 0000:86:00.1: cvl_0_1 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.840 21:02:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:36.840 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.840 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.840 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.840 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:36.840 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:36.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:09:36.841 00:09:36.841 --- 10.0.0.2 ping statistics --- 00:09:36.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.841 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:09:36.841 00:09:36.841 --- 10.0.0.1 ping statistics --- 00:09:36.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.841 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1184511 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1184511 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1184511 ']' 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 [2024-12-05 21:02:44.164579] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:09:36.841 [2024-12-05 21:02:44.164629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.841 [2024-12-05 21:02:44.244318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.841 [2024-12-05 21:02:44.286556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.841 [2024-12-05 21:02:44.286593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:36.841 [2024-12-05 21:02:44.286602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.841 [2024-12-05 21:02:44.286613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.841 [2024-12-05 21:02:44.286620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.841 [2024-12-05 21:02:44.287193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 [2024-12-05 21:02:44.436563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 [2024-12-05 21:02:44.456776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 malloc0 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.841 { 00:09:36.841 "params": { 00:09:36.841 "name": "Nvme$subsystem", 00:09:36.841 "trtype": "$TEST_TRANSPORT", 00:09:36.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.841 "adrfam": "ipv4", 00:09:36.841 "trsvcid": "$NVMF_PORT", 00:09:36.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.841 "hdgst": ${hdgst:-false}, 00:09:36.841 "ddgst": ${ddgst:-false} 00:09:36.841 }, 00:09:36.841 "method": "bdev_nvme_attach_controller" 00:09:36.841 } 00:09:36.841 EOF 00:09:36.841 )") 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:36.841 21:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.841 "params": { 00:09:36.841 "name": "Nvme1", 00:09:36.841 "trtype": "tcp", 00:09:36.841 "traddr": "10.0.0.2", 00:09:36.841 "adrfam": "ipv4", 00:09:36.841 "trsvcid": "4420", 00:09:36.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.841 "hdgst": false, 00:09:36.841 "ddgst": false 00:09:36.841 }, 00:09:36.841 "method": "bdev_nvme_attach_controller" 00:09:36.841 }' 00:09:36.841 [2024-12-05 21:02:44.543378] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:09:36.841 [2024-12-05 21:02:44.543420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184738 ] 00:09:36.841 [2024-12-05 21:02:44.617357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.841 [2024-12-05 21:02:44.658005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.100 Running I/O for 10 seconds... 
00:09:38.970 8551.00 IOPS, 66.80 MiB/s [2024-12-05T20:02:48.014Z] 8706.50 IOPS, 68.02 MiB/s [2024-12-05T20:02:49.390Z] 8737.00 IOPS, 68.26 MiB/s [2024-12-05T20:02:50.325Z] 8764.00 IOPS, 68.47 MiB/s [2024-12-05T20:02:51.308Z] 8787.60 IOPS, 68.65 MiB/s [2024-12-05T20:02:52.242Z] 8774.00 IOPS, 68.55 MiB/s [2024-12-05T20:02:53.178Z] 8782.57 IOPS, 68.61 MiB/s [2024-12-05T20:02:54.115Z] 8783.75 IOPS, 68.62 MiB/s [2024-12-05T20:02:55.054Z] 8785.33 IOPS, 68.64 MiB/s [2024-12-05T20:02:55.054Z] 8788.10 IOPS, 68.66 MiB/s 00:09:46.946 Latency(us) 00:09:46.946 [2024-12-05T20:02:55.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.946 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:46.946 Verification LBA range: start 0x0 length 0x1000 00:09:46.946 Nvme1n1 : 10.01 8794.15 68.70 0.00 0.00 14513.94 663.16 23717.79 00:09:46.946 [2024-12-05T20:02:55.054Z] =================================================================================================================== 00:09:46.946 [2024-12-05T20:02:55.054Z] Total : 8794.15 68.70 0.00 0.00 14513.94 663.16 23717.79 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1186368 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.211 21:02:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.211 { 00:09:47.211 "params": { 00:09:47.211 "name": "Nvme$subsystem", 00:09:47.211 "trtype": "$TEST_TRANSPORT", 00:09:47.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.211 "adrfam": "ipv4", 00:09:47.211 "trsvcid": "$NVMF_PORT", 00:09:47.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.211 "hdgst": ${hdgst:-false}, 00:09:47.211 "ddgst": ${ddgst:-false} 00:09:47.211 }, 00:09:47.211 "method": "bdev_nvme_attach_controller" 00:09:47.211 } 00:09:47.211 EOF 00:09:47.211 )") 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:47.211 [2024-12-05 21:02:55.174821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.174853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:47.211 21:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.211 "params": { 00:09:47.211 "name": "Nvme1", 00:09:47.211 "trtype": "tcp", 00:09:47.211 "traddr": "10.0.0.2", 00:09:47.211 "adrfam": "ipv4", 00:09:47.211 "trsvcid": "4420", 00:09:47.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.211 "hdgst": false, 00:09:47.211 "ddgst": false 00:09:47.211 }, 00:09:47.211 "method": "bdev_nvme_attach_controller" 00:09:47.211 }' 00:09:47.211 [2024-12-05 21:02:55.186814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.186828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.198848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.198864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.210872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.210882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.214568] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:09:47.211 [2024-12-05 21:02:55.214608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186368 ] 00:09:47.211 [2024-12-05 21:02:55.222903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.222915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.234933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.234943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.246964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.246975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.258996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.259006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.271030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.271042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.283060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.283071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.288860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.211 [2024-12-05 21:02:55.295096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:47.211 [2024-12-05 21:02:55.295107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.211 [2024-12-05 21:02:55.307126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.211 [2024-12-05 21:02:55.307141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.319160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.319173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.331191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.331201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.333570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.470 [2024-12-05 21:02:55.343226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.343239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.355267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.355285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.367293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.367307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.379321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.379333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.391353] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.391365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.403388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.403400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.415420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.415430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.427463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.427483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.439488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.439502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.451520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.451535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.463554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.463571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.475582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.470 [2024-12-05 21:02:55.475592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.470 [2024-12-05 21:02:55.487621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:09:47.470 [2024-12-05 21:02:55.487639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.470 Running I/O for 5 seconds...
00:09:47.470 [2024-12-05 21:02:55.502639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.470 [2024-12-05 21:02:55.502660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeats every ~14 ms from 21:02:55.516951 through 21:02:57.818553; only the periodic IOPS checkpoints below are distinct ...]
00:09:48.505 16811.00 IOPS, 131.34 MiB/s [2024-12-05T20:02:56.613Z]
00:09:49.542 16903.00 IOPS, 132.05 MiB/s [2024-12-05T20:02:57.650Z]
00:09:49.802 [2024-12-05 21:02:57.830357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:49.802 [2024-12-05 21:02:57.830380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:49.802 [2024-12-05 21:02:57.844246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.802 [2024-12-05 21:02:57.844264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.802 [2024-12-05 21:02:57.857976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.802 [2024-12-05 21:02:57.857999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.802 [2024-12-05 21:02:57.871809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.802 [2024-12-05 21:02:57.871828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.802 [2024-12-05 21:02:57.885785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.802 [2024-12-05 21:02:57.885803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.802 [2024-12-05 21:02:57.899672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.802 [2024-12-05 21:02:57.899690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.060 [2024-12-05 21:02:57.913497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.060 [2024-12-05 21:02:57.913519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.060 [2024-12-05 21:02:57.927158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.060 [2024-12-05 21:02:57.927178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.060 [2024-12-05 21:02:57.940834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.060 [2024-12-05 21:02:57.940854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:57.954540] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:57.954560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:57.968111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:57.968130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:57.982165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:57.982184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:57.995752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:57.995770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.009468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.009487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.023305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.023323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.037324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.037343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.051186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.051204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.065233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.065251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.079541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.079560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.090994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.091012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.105487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.105505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.118988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.119012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.132545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.132564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.146563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.146581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.061 [2024-12-05 21:02:58.160457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.061 [2024-12-05 21:02:58.160482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.174065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 
[2024-12-05 21:02:58.174084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.187935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.187953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.201906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.201924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.216265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.216285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.232435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.232454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.246845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.246863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.262686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.262705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.276582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.276600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.291129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.291148] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.306461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.306479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.320602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.320620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.334435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.334453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.348343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.348361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.361630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.361648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.375335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.375352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.389056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.389078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.320 [2024-12-05 21:02:58.403230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.403248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:50.320 [2024-12-05 21:02:58.416883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.320 [2024-12-05 21:02:58.416902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.430608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.430627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.444074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.444092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.457613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.457631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.471289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.471308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.484916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.484934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.498700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.498717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 16921.67 IOPS, 132.20 MiB/s [2024-12-05T20:02:58.687Z] [2024-12-05 21:02:58.512866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.512883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:50.579 [2024-12-05 21:02:58.526814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.526832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.540508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.540526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.579 [2024-12-05 21:02:58.554699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.579 [2024-12-05 21:02:58.554717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.568237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.568256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.581917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.581935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.595724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.595741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.609288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.609307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.623041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.623058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.636602] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.636620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.650353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.650377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.664169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.664186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.580 [2024-12-05 21:02:58.677881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.580 [2024-12-05 21:02:58.677899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.691581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.691599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.705069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.705086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.718862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.718881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.732651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.732669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.746271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.746290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.760378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.760395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.773916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.773933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.787900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.787918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.801548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.801566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.815592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.815610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.829228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.829246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.843212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.843230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.857448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 
[2024-12-05 21:02:58.857467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.871412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.871431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.885573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.885592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.899257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.899274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.913208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.913227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.926944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.926962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.839 [2024-12-05 21:02:58.940513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.839 [2024-12-05 21:02:58.940531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.098 [2024-12-05 21:02:58.954261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.098 [2024-12-05 21:02:58.954278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.098 [2024-12-05 21:02:58.967961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.098 [2024-12-05 21:02:58.967979] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.098 [2024-12-05 21:02:58.981139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.098 [2024-12-05 21:02:58.981157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.098 [2024-12-05 21:02:58.994841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.098 [2024-12-05 21:02:58.994858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.008755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.008772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.022530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.022548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.036496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.036514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.050533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.050552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.061690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.061708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.076327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.076346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:51.099 [2024-12-05 21:02:59.087582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.087601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.101845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.101863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.115165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.115183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.128804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.128822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.142318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.142336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.156358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.156381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.169986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.170003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.183738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.183755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.099 [2024-12-05 21:02:59.197437] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.099 [2024-12-05 21:02:59.197455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.211398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.211415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.225288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.225305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.239055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.239074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.252940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.252959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.266453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.266471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.280144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.280162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.293993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.294013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.307737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.307758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.321847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.321867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.335567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.335588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.349633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.349652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.363003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.363021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.376735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.376754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.390448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.390467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.404583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.404600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.415965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 
[2024-12-05 21:02:59.415983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 [2024-12-05 21:02:59.429861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.358 [2024-12-05 21:02:59.429879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.358 16945.75 IOPS, 132.39 MiB/s [2024-12-05T20:02:59.726Z] 16934.00 IOPS, 132.30 MiB/s [2024-12-05T20:03:00.764Z] [2024-12-05 21:03:00.506649] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.506667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 00:09:52.657 Latency(us) 00:09:52.657 [2024-12-05T20:03:00.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.657 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:52.657 Nvme1n1 : 5.01 16936.46 132.32 0.00 0.00 7550.70 3479.65 15978.30 00:09:52.657 [2024-12-05T20:03:00.765Z] =================================================================================================================== 00:09:52.657 [2024-12-05T20:03:00.765Z] Total : 16936.46 132.32 0.00 0.00 7550.70 3479.65 15978.30 00:09:52.657 [2024-12-05 21:03:00.516312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.516328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.528342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.528356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.540412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.540430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.552414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.552444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.564444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.564458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:52.657 [2024-12-05 21:03:00.576471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.576485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.588503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.588519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.600535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.600550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.612566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.612581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.624597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.624609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.636626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.636636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.648660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.648672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.660692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.660702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 [2024-12-05 21:03:00.672725] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.657 [2024-12-05 21:03:00.672735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1186368) - No such process 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1186368 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.657 delay0 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.657 21:03:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:52.917 [2024-12-05 21:03:00.829439] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:01.036 Initializing NVMe Controllers 00:10:01.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:01.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:01.036 Initialization complete. Launching workers. 00:10:01.036 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5784 00:10:01.036 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6068, failed to submit 36 00:10:01.036 success 5878, unsuccessful 190, failed 0 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.036 rmmod nvme_tcp 00:10:01.036 rmmod nvme_fabrics 00:10:01.036 rmmod nvme_keyring 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.036 21:03:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1184511 ']' 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1184511 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1184511 ']' 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1184511 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1184511 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1184511' 00:10:01.036 killing process with pid 1184511 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1184511 00:10:01.036 21:03:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1184511 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
iptr 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.036 21:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.995 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.995 00:10:01.995 real 0m32.184s 00:10:01.995 user 0m43.176s 00:10:01.995 sys 0m11.413s 00:10:01.995 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.995 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.995 ************************************ 00:10:01.995 END TEST nvmf_zcopy 00:10:01.995 ************************************ 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.299 
************************************ 00:10:02.299 START TEST nvmf_nmic 00:10:02.299 ************************************ 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:02.299 * Looking for test storage... 00:10:02.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:02.299 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.300 --rc genhtml_branch_coverage=1 00:10:02.300 --rc genhtml_function_coverage=1 00:10:02.300 --rc genhtml_legend=1 00:10:02.300 --rc geninfo_all_blocks=1 00:10:02.300 --rc geninfo_unexecuted_blocks=1 00:10:02.300 00:10:02.300 ' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.300 --rc genhtml_branch_coverage=1 00:10:02.300 --rc genhtml_function_coverage=1 00:10:02.300 --rc genhtml_legend=1 00:10:02.300 --rc geninfo_all_blocks=1 00:10:02.300 --rc geninfo_unexecuted_blocks=1 00:10:02.300 00:10:02.300 ' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.300 --rc genhtml_branch_coverage=1 00:10:02.300 --rc genhtml_function_coverage=1 00:10:02.300 --rc genhtml_legend=1 00:10:02.300 --rc geninfo_all_blocks=1 00:10:02.300 --rc geninfo_unexecuted_blocks=1 00:10:02.300 00:10:02.300 ' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.300 --rc genhtml_branch_coverage=1 00:10:02.300 --rc genhtml_function_coverage=1 00:10:02.300 --rc genhtml_legend=1 00:10:02.300 --rc geninfo_all_blocks=1 00:10:02.300 --rc geninfo_unexecuted_blocks=1 00:10:02.300 00:10:02.300 ' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.300 
21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.300 
21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.300 21:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.893 21:03:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.893 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:08.894 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:08.894 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:08.894 Found net devices under 0000:86:00.0: cvl_0_0 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:08.894 Found net devices under 0000:86:00.1: cvl_0_1 00:10:08.894 
21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:10:08.894 00:10:08.894 --- 10.0.0.2 ping statistics --- 00:10:08.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.894 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:10:08.894 00:10:08.894 --- 10.0.0.1 ping statistics --- 00:10:08.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.894 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1192657 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1192657 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1192657 ']' 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.894 [2024-12-05 21:03:16.374208] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:10:08.894 [2024-12-05 21:03:16.374249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.894 [2024-12-05 21:03:16.451251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.894 [2024-12-05 21:03:16.494840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.894 [2024-12-05 21:03:16.494877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.894 [2024-12-05 21:03:16.494884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.894 [2024-12-05 21:03:16.494890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.894 [2024-12-05 21:03:16.494895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.894 [2024-12-05 21:03:16.496299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.894 [2024-12-05 21:03:16.496412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.894 [2024-12-05 21:03:16.496455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.894 [2024-12-05 21:03:16.496455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:08.894 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 [2024-12-05 21:03:16.642842] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.895 
21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 Malloc0 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 [2024-12-05 21:03:16.703357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:08.895 test case1: single bdev can't be used in multiple subsystems 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 [2024-12-05 21:03:16.731239] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:08.895 [2024-12-05 
21:03:16.731258] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:08.895 [2024-12-05 21:03:16.731266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.895 request: 00:10:08.895 { 00:10:08.895 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:08.895 "namespace": { 00:10:08.895 "bdev_name": "Malloc0", 00:10:08.895 "no_auto_visible": false, 00:10:08.895 "hide_metadata": false 00:10:08.895 }, 00:10:08.895 "method": "nvmf_subsystem_add_ns", 00:10:08.895 "req_id": 1 00:10:08.895 } 00:10:08.895 Got JSON-RPC error response 00:10:08.895 response: 00:10:08.895 { 00:10:08.895 "code": -32602, 00:10:08.895 "message": "Invalid parameters" 00:10:08.895 } 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:08.895 Adding namespace failed - expected result. 
00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:08.895 test case2: host connect to nvmf target in multiple paths 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.895 [2024-12-05 21:03:16.743395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.895 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.272 21:03:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:11.208 21:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.208 21:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.208 21:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.208 21:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:11.208 21:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:13.113 21:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:13.113 [global] 00:10:13.113 thread=1 00:10:13.113 invalidate=1 00:10:13.113 rw=write 00:10:13.113 time_based=1 00:10:13.113 runtime=1 00:10:13.113 ioengine=libaio 00:10:13.113 direct=1 00:10:13.113 bs=4096 00:10:13.113 iodepth=1 00:10:13.113 norandommap=0 00:10:13.113 numjobs=1 00:10:13.113 00:10:13.113 verify_dump=1 00:10:13.113 verify_backlog=512 00:10:13.113 verify_state_save=0 00:10:13.113 do_verify=1 00:10:13.113 verify=crc32c-intel 00:10:13.113 [job0] 00:10:13.113 filename=/dev/nvme0n1 00:10:13.368 Could not set queue depth (nvme0n1) 00:10:13.625 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.625 fio-3.35 00:10:13.625 Starting 1 thread 00:10:14.561 00:10:14.561 job0: (groupid=0, jobs=1): err= 0: pid=1193550: Thu Dec 5 21:03:22 2024 00:10:14.561 read: IOPS=337, BW=1350KiB/s (1383kB/s)(1364KiB/1010msec) 00:10:14.561 slat (nsec): min=6169, max=29055, avg=8398.83, stdev=3906.41 00:10:14.561 clat (usec): min=144, max=41074, avg=2747.40, stdev=9800.93 00:10:14.561 lat (usec): min=152, max=41097, 
avg=2755.80, stdev=9804.36 00:10:14.561 clat percentiles (usec): 00:10:14.561 | 1.00th=[ 155], 5.00th=[ 172], 10.00th=[ 212], 20.00th=[ 233], 00:10:14.561 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:10:14.561 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[41157], 00:10:14.561 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:14.561 | 99.99th=[41157] 00:10:14.561 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:14.561 slat (nsec): min=9317, max=39699, avg=10337.79, stdev=1721.73 00:10:14.561 clat (usec): min=105, max=1192, avg=122.69, stdev=49.29 00:10:14.561 lat (usec): min=116, max=1202, avg=133.03, stdev=49.51 00:10:14.561 clat percentiles (usec): 00:10:14.561 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 114], 00:10:14.561 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 00:10:14.561 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 137], 95.00th=[ 147], 00:10:14.561 | 99.00th=[ 159], 99.50th=[ 233], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:14.561 | 99.99th=[ 1188] 00:10:14.561 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:10:14.561 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:14.561 lat (usec) : 250=84.99%, 500=12.43% 00:10:14.561 lat (msec) : 2=0.12%, 50=2.46% 00:10:14.561 cpu : usr=0.40%, sys=0.79%, ctx=853, majf=0, minf=1 00:10:14.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.561 issued rwts: total=341,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.561 00:10:14.561 Run status group 0 (all jobs): 00:10:14.561 READ: bw=1350KiB/s (1383kB/s), 1350KiB/s-1350KiB/s (1383kB/s-1383kB/s), io=1364KiB (1397kB), 
run=1010-1010msec 00:10:14.561 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:10:14.561 00:10:14.561 Disk stats (read/write): 00:10:14.561 nvme0n1: ios=388/512, merge=0/0, ticks=954/60, in_queue=1014, util=95.39% 00:10:14.561 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.820 rmmod nvme_tcp 00:10:14.820 rmmod nvme_fabrics 00:10:14.820 rmmod nvme_keyring 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1192657 ']' 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1192657 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1192657 ']' 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1192657 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.820 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1192657 00:10:15.079 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.079 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.079 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1192657' 00:10:15.079 killing process with pid 1192657 00:10:15.079 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1192657 00:10:15.079 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1192657 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.079 21:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.615 00:10:17.615 real 0m15.043s 00:10:17.615 user 0m33.646s 00:10:17.615 sys 0m5.250s 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 ************************************ 00:10:17.615 END TEST nvmf_nmic 00:10:17.615 ************************************ 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 ************************************ 00:10:17.615 START TEST nvmf_fio_target 00:10:17.615 ************************************ 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.615 * Looking for test storage... 00:10:17.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.615 21:03:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.615 --rc genhtml_branch_coverage=1 00:10:17.615 --rc genhtml_function_coverage=1 00:10:17.615 --rc genhtml_legend=1 00:10:17.615 --rc geninfo_all_blocks=1 00:10:17.615 --rc geninfo_unexecuted_blocks=1 00:10:17.615 00:10:17.615 ' 00:10:17.615 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.615 --rc genhtml_branch_coverage=1 00:10:17.615 --rc genhtml_function_coverage=1 00:10:17.615 --rc genhtml_legend=1 00:10:17.615 --rc geninfo_all_blocks=1 00:10:17.616 --rc geninfo_unexecuted_blocks=1 00:10:17.616 00:10:17.616 ' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.616 --rc genhtml_branch_coverage=1 00:10:17.616 --rc genhtml_function_coverage=1 00:10:17.616 --rc genhtml_legend=1 00:10:17.616 --rc geninfo_all_blocks=1 00:10:17.616 --rc geninfo_unexecuted_blocks=1 00:10:17.616 00:10:17.616 ' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.616 --rc 
genhtml_branch_coverage=1 00:10:17.616 --rc genhtml_function_coverage=1 00:10:17.616 --rc genhtml_legend=1 00:10:17.616 --rc geninfo_all_blocks=1 00:10:17.616 --rc geninfo_unexecuted_blocks=1 00:10:17.616 00:10:17.616 ' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.616 21:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.176 21:03:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:24.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:24.176 21:03:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:24.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:24.176 Found net devices under 0000:86:00.0: cvl_0_0 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.176 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:24.176 Found net devices under 0000:86:00.1: cvl_0_1 
00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:24.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:24.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:10:24.177 00:10:24.177 --- 10.0.0.2 ping statistics --- 00:10:24.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.177 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:24.177 00:10:24.177 --- 10.0.0.1 ping statistics --- 00:10:24.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.177 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1197322 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1197322 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1197322 ']' 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.177 [2024-12-05 21:03:31.496683] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:10:24.177 [2024-12-05 21:03:31.496727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.177 [2024-12-05 21:03:31.574333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.177 [2024-12-05 21:03:31.618577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.177 [2024-12-05 21:03:31.618614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.177 [2024-12-05 21:03:31.618621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.177 [2024-12-05 21:03:31.618628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.177 [2024-12-05 21:03:31.618635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:24.177 [2024-12-05 21:03:31.620120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.177 [2024-12-05 21:03:31.620231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.177 [2024-12-05 21:03:31.620359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.177 [2024-12-05 21:03:31.620360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.177 [2024-12-05 21:03:31.930093] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.177 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.177 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:24.177 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.436 21:03:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:24.436 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.695 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:24.695 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.695 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:24.695 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:24.954 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.213 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:25.213 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.471 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:25.471 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.730 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:25.730 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:25.730 21:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.989 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.989 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.247 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:26.247 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:26.507 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.507 [2024-12-05 21:03:34.602035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.766 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:26.766 21:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:27.025 21:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:28.398 21:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:28.398 21:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.398 21:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.398 21:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:28.398 21:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:28.398 21:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:30.302 21:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:30.302 [global] 00:10:30.302 thread=1 00:10:30.302 invalidate=1 00:10:30.302 rw=write 00:10:30.302 time_based=1 00:10:30.302 runtime=1 00:10:30.302 ioengine=libaio 00:10:30.302 direct=1 00:10:30.302 bs=4096 00:10:30.302 iodepth=1 00:10:30.302 norandommap=0 00:10:30.302 numjobs=1 00:10:30.302 00:10:30.302 
verify_dump=1 00:10:30.302 verify_backlog=512 00:10:30.302 verify_state_save=0 00:10:30.302 do_verify=1 00:10:30.302 verify=crc32c-intel 00:10:30.302 [job0] 00:10:30.302 filename=/dev/nvme0n1 00:10:30.302 [job1] 00:10:30.302 filename=/dev/nvme0n2 00:10:30.302 [job2] 00:10:30.302 filename=/dev/nvme0n3 00:10:30.302 [job3] 00:10:30.302 filename=/dev/nvme0n4 00:10:30.302 Could not set queue depth (nvme0n1) 00:10:30.302 Could not set queue depth (nvme0n2) 00:10:30.302 Could not set queue depth (nvme0n3) 00:10:30.302 Could not set queue depth (nvme0n4) 00:10:30.561 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.561 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.561 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.561 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.561 fio-3.35 00:10:30.561 Starting 4 threads 00:10:31.940 00:10:31.940 job0: (groupid=0, jobs=1): err= 0: pid=1198786: Thu Dec 5 21:03:39 2024 00:10:31.940 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:31.940 slat (nsec): min=7307, max=49566, avg=8537.75, stdev=1608.58 00:10:31.940 clat (usec): min=184, max=555, avg=240.82, stdev=43.64 00:10:31.940 lat (usec): min=193, max=571, avg=249.35, stdev=43.69 00:10:31.940 clat percentiles (usec): 00:10:31.940 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:10:31.940 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:10:31.940 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 273], 00:10:31.940 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 553], 00:10:31.940 | 99.99th=[ 553] 00:10:31.940 write: IOPS=2446, BW=9786KiB/s (10.0MB/s)(9796KiB/1001msec); 0 zone resets 00:10:31.940 slat (nsec): min=11072, max=46604, avg=12572.16, stdev=1956.40 
00:10:31.940 clat (usec): min=118, max=336, avg=181.73, stdev=39.36 00:10:31.940 lat (usec): min=130, max=375, avg=194.30, stdev=39.51 00:10:31.940 clat percentiles (usec): 00:10:31.940 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 149], 00:10:31.940 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:10:31.940 | 70.00th=[ 190], 80.00th=[ 225], 90.00th=[ 251], 95.00th=[ 260], 00:10:31.940 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 289], 99.95th=[ 306], 00:10:31.940 | 99.99th=[ 338] 00:10:31.940 bw ( KiB/s): min= 8848, max= 8848, per=28.92%, avg=8848.00, stdev= 0.00, samples=1 00:10:31.940 iops : min= 2212, max= 2212, avg=2212.00, stdev= 0.00, samples=1 00:10:31.940 lat (usec) : 250=83.79%, 500=15.81%, 750=0.40% 00:10:31.940 cpu : usr=3.30%, sys=8.10%, ctx=4498, majf=0, minf=1 00:10:31.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 issued rwts: total=2048,2449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.941 job1: (groupid=0, jobs=1): err= 0: pid=1198795: Thu Dec 5 21:03:39 2024 00:10:31.941 read: IOPS=197, BW=790KiB/s (809kB/s)(808KiB/1023msec) 00:10:31.941 slat (nsec): min=6632, max=18003, avg=7969.04, stdev=1303.79 00:10:31.941 clat (usec): min=199, max=42132, avg=4548.42, stdev=12519.32 00:10:31.941 lat (usec): min=206, max=42141, avg=4556.39, stdev=12519.83 00:10:31.941 clat percentiles (usec): 00:10:31.941 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 245], 00:10:31.941 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:10:31.941 | 70.00th=[ 306], 80.00th=[ 412], 90.00th=[40633], 95.00th=[41157], 00:10:31.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:31.941 | 99.99th=[42206] 00:10:31.941 
write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:10:31.941 slat (nsec): min=9230, max=39511, avg=10668.91, stdev=2230.92 00:10:31.941 clat (usec): min=136, max=394, avg=186.17, stdev=23.89 00:10:31.941 lat (usec): min=147, max=429, avg=196.83, stdev=24.62 00:10:31.941 clat percentiles (usec): 00:10:31.941 | 1.00th=[ 143], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:10:31.941 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:10:31.941 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 227], 00:10:31.941 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 396], 99.95th=[ 396], 00:10:31.941 | 99.99th=[ 396] 00:10:31.941 bw ( KiB/s): min= 4096, max= 4096, per=13.39%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.941 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.941 lat (usec) : 250=77.73%, 500=19.19% 00:10:31.941 lat (msec) : 2=0.14%, 50=2.94% 00:10:31.941 cpu : usr=0.49%, sys=0.49%, ctx=714, majf=0, minf=2 00:10:31.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 issued rwts: total=202,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.941 job2: (groupid=0, jobs=1): err= 0: pid=1198813: Thu Dec 5 21:03:39 2024 00:10:31.941 read: IOPS=2275, BW=9103KiB/s (9321kB/s)(9112KiB/1001msec) 00:10:31.941 slat (nsec): min=7850, max=43282, avg=9101.68, stdev=1725.03 00:10:31.941 clat (usec): min=174, max=399, avg=219.97, stdev=15.53 00:10:31.941 lat (usec): min=182, max=408, avg=229.07, stdev=15.69 00:10:31.941 clat percentiles (usec): 00:10:31.941 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:31.941 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:10:31.941 | 70.00th=[ 227], 
80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 245], 00:10:31.941 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 326], 00:10:31.941 | 99.99th=[ 400] 00:10:31.941 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.941 slat (nsec): min=11098, max=64547, avg=12693.43, stdev=2390.30 00:10:31.941 clat (usec): min=126, max=4137, avg=168.32, stdev=80.09 00:10:31.941 lat (usec): min=138, max=4149, avg=181.01, stdev=80.16 00:10:31.941 clat percentiles (usec): 00:10:31.941 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:31.941 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:10:31.941 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 192], 00:10:31.941 | 99.00th=[ 239], 99.50th=[ 241], 99.90th=[ 285], 99.95th=[ 326], 00:10:31.941 | 99.99th=[ 4146] 00:10:31.941 bw ( KiB/s): min=11112, max=11112, per=36.32%, avg=11112.00, stdev= 0.00, samples=1 00:10:31.941 iops : min= 2778, max= 2778, avg=2778.00, stdev= 0.00, samples=1 00:10:31.941 lat (usec) : 250=98.55%, 500=1.43% 00:10:31.941 lat (msec) : 10=0.02% 00:10:31.941 cpu : usr=5.30%, sys=6.90%, ctx=4839, majf=0, minf=1 00:10:31.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 issued rwts: total=2278,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.941 job3: (groupid=0, jobs=1): err= 0: pid=1198819: Thu Dec 5 21:03:39 2024 00:10:31.941 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:31.941 slat (nsec): min=7528, max=24714, avg=8717.41, stdev=1171.69 00:10:31.941 clat (usec): min=187, max=40861, avg=274.49, stdev=1261.40 00:10:31.941 lat (usec): min=197, max=40870, avg=283.21, stdev=1261.40 00:10:31.941 clat percentiles (usec): 00:10:31.941 | 
1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:10:31.941 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:10:31.941 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:10:31.941 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 979], 99.95th=[40109], 00:10:31.941 | 99.99th=[40633] 00:10:31.941 write: IOPS=2300, BW=9203KiB/s (9424kB/s)(9212KiB/1001msec); 0 zone resets 00:10:31.941 slat (nsec): min=10991, max=53506, avg=12525.03, stdev=2239.85 00:10:31.941 clat (usec): min=128, max=294, avg=164.08, stdev=16.18 00:10:31.941 lat (usec): min=139, max=309, avg=176.60, stdev=16.96 00:10:31.941 clat percentiles (usec): 00:10:31.941 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:10:31.941 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:31.941 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 192], 00:10:31.941 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 251], 99.95th=[ 277], 00:10:31.941 | 99.99th=[ 293] 00:10:31.941 bw ( KiB/s): min=11080, max=11080, per=36.22%, avg=11080.00, stdev= 0.00, samples=1 00:10:31.941 iops : min= 2770, max= 2770, avg=2770.00, stdev= 0.00, samples=1 00:10:31.941 lat (usec) : 250=91.40%, 500=8.46%, 750=0.07%, 1000=0.02% 00:10:31.941 lat (msec) : 50=0.05% 00:10:31.941 cpu : usr=3.60%, sys=7.30%, ctx=4353, majf=0, minf=1 00:10:31.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.941 issued rwts: total=2048,2303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.941 00:10:31.941 Run status group 0 (all jobs): 00:10:31.941 READ: bw=25.1MiB/s (26.3MB/s), 790KiB/s-9103KiB/s (809kB/s-9321kB/s), io=25.7MiB (26.9MB), run=1001-1023msec 00:10:31.941 WRITE: bw=29.9MiB/s (31.3MB/s), 
2002KiB/s-9.99MiB/s (2050kB/s-10.5MB/s), io=30.6MiB (32.0MB), run=1001-1023msec 00:10:31.941 00:10:31.941 Disk stats (read/write): 00:10:31.941 nvme0n1: ios=1730/2048, merge=0/0, ticks=1379/370, in_queue=1749, util=97.90% 00:10:31.941 nvme0n2: ios=210/512, merge=0/0, ticks=724/92, in_queue=816, util=86.88% 00:10:31.941 nvme0n3: ios=2087/2048, merge=0/0, ticks=744/320, in_queue=1064, util=98.33% 00:10:31.941 nvme0n4: ios=1817/2048, merge=0/0, ticks=1394/309, in_queue=1703, util=98.21% 00:10:31.941 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:31.941 [global] 00:10:31.941 thread=1 00:10:31.941 invalidate=1 00:10:31.941 rw=randwrite 00:10:31.941 time_based=1 00:10:31.941 runtime=1 00:10:31.941 ioengine=libaio 00:10:31.941 direct=1 00:10:31.941 bs=4096 00:10:31.941 iodepth=1 00:10:31.941 norandommap=0 00:10:31.941 numjobs=1 00:10:31.941 00:10:31.941 verify_dump=1 00:10:31.941 verify_backlog=512 00:10:31.941 verify_state_save=0 00:10:31.941 do_verify=1 00:10:31.941 verify=crc32c-intel 00:10:31.941 [job0] 00:10:31.941 filename=/dev/nvme0n1 00:10:31.941 [job1] 00:10:31.941 filename=/dev/nvme0n2 00:10:31.941 [job2] 00:10:31.941 filename=/dev/nvme0n3 00:10:31.941 [job3] 00:10:31.941 filename=/dev/nvme0n4 00:10:31.941 Could not set queue depth (nvme0n1) 00:10:31.941 Could not set queue depth (nvme0n2) 00:10:31.941 Could not set queue depth (nvme0n3) 00:10:31.941 Could not set queue depth (nvme0n4) 00:10:32.201 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.201 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.201 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.201 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.201 fio-3.35 00:10:32.201 Starting 4 threads 00:10:33.599 00:10:33.599 job0: (groupid=0, jobs=1): err= 0: pid=1199259: Thu Dec 5 21:03:41 2024 00:10:33.599 read: IOPS=965, BW=3861KiB/s (3954kB/s)(3896KiB/1009msec) 00:10:33.599 slat (nsec): min=6920, max=26575, avg=8512.44, stdev=2082.87 00:10:33.600 clat (usec): min=179, max=41056, avg=844.19, stdev=4925.37 00:10:33.600 lat (usec): min=187, max=41066, avg=852.71, stdev=4926.72 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:33.600 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:10:33.600 | 70.00th=[ 233], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 297], 00:10:33.600 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:33.600 | 99.99th=[41157] 00:10:33.600 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:10:33.600 slat (nsec): min=10405, max=37480, avg=11552.52, stdev=1680.63 00:10:33.600 clat (usec): min=118, max=252, avg=156.14, stdev=14.85 00:10:33.600 lat (usec): min=129, max=289, avg=167.70, stdev=15.21 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:10:33.600 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:10:33.600 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:10:33.600 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 237], 99.95th=[ 253], 00:10:33.600 | 99.99th=[ 253] 00:10:33.600 bw ( KiB/s): min= 4096, max= 4096, per=15.00%, avg=4096.00, stdev= 0.00, samples=2 00:10:33.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:33.600 lat (usec) : 250=89.54%, 500=9.51%, 750=0.20% 00:10:33.600 lat (msec) : 50=0.75% 00:10:33.600 cpu : usr=1.79%, sys=2.98%, ctx=2003, majf=0, minf=1 00:10:33.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.600 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.600 issued rwts: total=974,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.600 job1: (groupid=0, jobs=1): err= 0: pid=1199260: Thu Dec 5 21:03:41 2024 00:10:33.600 read: IOPS=512, BW=2048KiB/s (2097kB/s)(2048KiB/1000msec) 00:10:33.600 slat (nsec): min=7396, max=25930, avg=9303.73, stdev=3212.72 00:10:33.600 clat (usec): min=181, max=41475, avg=1672.07, stdev=7540.35 00:10:33.600 lat (usec): min=189, max=41484, avg=1681.37, stdev=7541.58 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:10:33.600 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:10:33.600 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 269], 00:10:33.600 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:33.600 | 99.99th=[41681] 00:10:33.600 write: IOPS=742, BW=2968KiB/s (3039kB/s)(2968KiB/1000msec); 0 zone resets 00:10:33.600 slat (nsec): min=10441, max=38233, avg=11900.18, stdev=2130.32 00:10:33.600 clat (usec): min=125, max=330, avg=170.37, stdev=19.11 00:10:33.600 lat (usec): min=136, max=368, avg=182.27, stdev=19.60 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:10:33.600 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:33.600 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:10:33.600 | 99.00th=[ 215], 99.50th=[ 217], 99.90th=[ 330], 99.95th=[ 330], 00:10:33.600 | 99.99th=[ 330] 00:10:33.600 bw ( KiB/s): min= 4096, max= 4096, per=15.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:33.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:33.600 lat (usec) : 250=97.13%, 500=1.36% 00:10:33.600 lat (msec) : 20=0.08%, 
50=1.44% 00:10:33.600 cpu : usr=1.60%, sys=1.50%, ctx=1256, majf=0, minf=1 00:10:33.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.600 issued rwts: total=512,742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.600 job2: (groupid=0, jobs=1): err= 0: pid=1199261: Thu Dec 5 21:03:41 2024 00:10:33.600 read: IOPS=2309, BW=9239KiB/s (9460kB/s)(9248KiB/1001msec) 00:10:33.600 slat (nsec): min=7033, max=46415, avg=8609.25, stdev=1408.91 00:10:33.600 clat (usec): min=171, max=403, avg=223.41, stdev=26.19 00:10:33.600 lat (usec): min=180, max=428, avg=232.02, stdev=26.28 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:10:33.600 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:10:33.600 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 273], 95.00th=[ 281], 00:10:33.600 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 371], 99.95th=[ 392], 00:10:33.600 | 99.99th=[ 404] 00:10:33.600 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:33.600 slat (nsec): min=10207, max=42261, avg=11943.96, stdev=1876.81 00:10:33.600 clat (usec): min=133, max=501, avg=162.90, stdev=13.89 00:10:33.600 lat (usec): min=145, max=512, avg=174.84, stdev=14.12 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:33.600 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:33.600 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:10:33.600 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 235], 99.95th=[ 251], 00:10:33.600 | 99.99th=[ 502] 00:10:33.600 bw ( KiB/s): min=11896, max=11896, per=43.58%, avg=11896.00, stdev= 0.00, 
samples=1 00:10:33.600 iops : min= 2974, max= 2974, avg=2974.00, stdev= 0.00, samples=1 00:10:33.600 lat (usec) : 250=92.92%, 500=7.06%, 750=0.02% 00:10:33.600 cpu : usr=4.20%, sys=8.00%, ctx=4872, majf=0, minf=2 00:10:33.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.600 issued rwts: total=2312,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.600 job3: (groupid=0, jobs=1): err= 0: pid=1199262: Thu Dec 5 21:03:41 2024 00:10:33.600 read: IOPS=2171, BW=8687KiB/s (8896kB/s)(8696KiB/1001msec) 00:10:33.600 slat (nsec): min=7351, max=38848, avg=8627.72, stdev=1273.03 00:10:33.600 clat (usec): min=180, max=284, avg=232.75, stdev=17.81 00:10:33.600 lat (usec): min=189, max=292, avg=241.38, stdev=17.83 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:10:33.600 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:10:33.600 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 260], 00:10:33.600 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 273], 99.95th=[ 277], 00:10:33.600 | 99.99th=[ 285] 00:10:33.600 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:33.600 slat (nsec): min=10597, max=49715, avg=12000.96, stdev=1805.66 00:10:33.600 clat (usec): min=129, max=286, avg=168.11, stdev=18.75 00:10:33.600 lat (usec): min=140, max=313, avg=180.11, stdev=18.98 00:10:33.600 clat percentiles (usec): 00:10:33.600 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:10:33.600 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:33.600 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 204], 00:10:33.600 | 99.00th=[ 233], 99.50th=[ 265], 99.90th=[ 
281], 99.95th=[ 285], 00:10:33.600 | 99.99th=[ 289] 00:10:33.600 bw ( KiB/s): min=11672, max=11672, per=42.76%, avg=11672.00, stdev= 0.00, samples=1 00:10:33.600 iops : min= 2918, max= 2918, avg=2918.00, stdev= 0.00, samples=1 00:10:33.600 lat (usec) : 250=91.40%, 500=8.60% 00:10:33.600 cpu : usr=3.20%, sys=8.60%, ctx=4735, majf=0, minf=1 00:10:33.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.601 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.601 00:10:33.601 Run status group 0 (all jobs): 00:10:33.601 READ: bw=23.1MiB/s (24.2MB/s), 2048KiB/s-9239KiB/s (2097kB/s-9460kB/s), io=23.3MiB (24.5MB), run=1000-1009msec 00:10:33.601 WRITE: bw=26.7MiB/s (28.0MB/s), 2968KiB/s-9.99MiB/s (3039kB/s-10.5MB/s), io=26.9MiB (28.2MB), run=1000-1009msec 00:10:33.601 00:10:33.601 Disk stats (read/write): 00:10:33.601 nvme0n1: ios=698/1024, merge=0/0, ticks=1646/154, in_queue=1800, util=98.30% 00:10:33.601 nvme0n2: ios=74/512, merge=0/0, ticks=1620/84, in_queue=1704, util=98.38% 00:10:33.601 nvme0n3: ios=2048/2101, merge=0/0, ticks=426/315, in_queue=741, util=88.97% 00:10:33.601 nvme0n4: ios=2011/2048, merge=0/0, ticks=1423/316, in_queue=1739, util=98.32% 00:10:33.601 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:33.601 [global] 00:10:33.601 thread=1 00:10:33.601 invalidate=1 00:10:33.601 rw=write 00:10:33.601 time_based=1 00:10:33.601 runtime=1 00:10:33.601 ioengine=libaio 00:10:33.601 direct=1 00:10:33.601 bs=4096 00:10:33.601 iodepth=128 00:10:33.601 norandommap=0 00:10:33.601 numjobs=1 00:10:33.601 00:10:33.601 verify_dump=1 
00:10:33.601 verify_backlog=512 00:10:33.601 verify_state_save=0 00:10:33.601 do_verify=1 00:10:33.601 verify=crc32c-intel 00:10:33.601 [job0] 00:10:33.601 filename=/dev/nvme0n1 00:10:33.601 [job1] 00:10:33.601 filename=/dev/nvme0n2 00:10:33.601 [job2] 00:10:33.601 filename=/dev/nvme0n3 00:10:33.601 [job3] 00:10:33.601 filename=/dev/nvme0n4 00:10:33.601 Could not set queue depth (nvme0n1) 00:10:33.601 Could not set queue depth (nvme0n2) 00:10:33.601 Could not set queue depth (nvme0n3) 00:10:33.601 Could not set queue depth (nvme0n4) 00:10:33.860 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.860 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.860 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.860 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.860 fio-3.35 00:10:33.860 Starting 4 threads 00:10:35.232 00:10:35.232 job0: (groupid=0, jobs=1): err= 0: pid=1199634: Thu Dec 5 21:03:42 2024 00:10:35.232 read: IOPS=3803, BW=14.9MiB/s (15.6MB/s)(15.5MiB/1044msec) 00:10:35.232 slat (nsec): min=1096, max=14198k, avg=115853.12, stdev=712035.76 00:10:35.232 clat (usec): min=5308, max=80750, avg=15494.40, stdev=10290.15 00:10:35.232 lat (usec): min=5314, max=80757, avg=15610.26, stdev=10331.40 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[10552], 20.00th=[11731], 00:10:35.232 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:10:35.232 | 70.00th=[14353], 80.00th=[15533], 90.00th=[19006], 95.00th=[23462], 00:10:35.232 | 99.00th=[74974], 99.50th=[80217], 99.90th=[80217], 99.95th=[81265], 00:10:35.232 | 99.99th=[81265] 00:10:35.232 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:10:35.232 slat (nsec): min=1738, 
max=8977.2k, avg=126037.21, stdev=548054.11 00:10:35.232 clat (usec): min=8332, max=42718, avg=16936.82, stdev=4837.87 00:10:35.232 lat (usec): min=8340, max=42727, avg=17062.86, stdev=4869.44 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[11731], 20.00th=[12518], 00:10:35.232 | 30.00th=[13435], 40.00th=[14353], 50.00th=[16909], 60.00th=[19006], 00:10:35.232 | 70.00th=[19530], 80.00th=[20055], 90.00th=[21890], 95.00th=[23462], 00:10:35.232 | 99.00th=[34866], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:10:35.232 | 99.99th=[42730] 00:10:35.232 bw ( KiB/s): min=16072, max=16662, per=23.84%, avg=16367.00, stdev=417.19, samples=2 00:10:35.232 iops : min= 4018, max= 4165, avg=4091.50, stdev=103.94, samples=2 00:10:35.232 lat (msec) : 10=3.55%, 20=81.55%, 50=13.34%, 100=1.56% 00:10:35.232 cpu : usr=3.26%, sys=4.70%, ctx=485, majf=0, minf=1 00:10:35.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:35.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.232 issued rwts: total=3971,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.232 job1: (groupid=0, jobs=1): err= 0: pid=1199635: Thu Dec 5 21:03:42 2024 00:10:35.232 read: IOPS=2459, BW=9836KiB/s (10.1MB/s)(9856KiB/1002msec) 00:10:35.232 slat (nsec): min=1132, max=15397k, avg=204504.73, stdev=1208566.52 00:10:35.232 clat (usec): min=583, max=52762, avg=25753.40, stdev=11746.99 00:10:35.232 lat (usec): min=2538, max=55046, avg=25957.91, stdev=11778.80 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 2769], 5.00th=[11076], 10.00th=[11863], 20.00th=[15139], 00:10:35.232 | 30.00th=[19006], 40.00th=[19792], 50.00th=[22152], 60.00th=[27132], 00:10:35.232 | 70.00th=[32375], 80.00th=[37487], 90.00th=[42730], 95.00th=[46924], 00:10:35.232 | 
99.00th=[49546], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:10:35.232 | 99.99th=[52691] 00:10:35.232 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:35.232 slat (nsec): min=1934, max=18617k, avg=187924.20, stdev=748545.65 00:10:35.232 clat (usec): min=6368, max=44797, avg=23852.69, stdev=8305.30 00:10:35.232 lat (usec): min=6377, max=44805, avg=24040.62, stdev=8341.42 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 7898], 5.00th=[11469], 10.00th=[12387], 20.00th=[17957], 00:10:35.232 | 30.00th=[19530], 40.00th=[20841], 50.00th=[22152], 60.00th=[24511], 00:10:35.232 | 70.00th=[28967], 80.00th=[32637], 90.00th=[35390], 95.00th=[37487], 00:10:35.232 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:10:35.232 | 99.99th=[44827] 00:10:35.232 bw ( KiB/s): min=12263, max=12263, per=17.86%, avg=12263.00, stdev= 0.00, samples=1 00:10:35.232 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:35.232 lat (usec) : 750=0.02% 00:10:35.232 lat (msec) : 4=0.64%, 10=2.05%, 20=36.29%, 50=60.73%, 100=0.28% 00:10:35.232 cpu : usr=1.70%, sys=2.70%, ctx=371, majf=0, minf=1 00:10:35.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:35.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.232 issued rwts: total=2464,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.232 job2: (groupid=0, jobs=1): err= 0: pid=1199636: Thu Dec 5 21:03:42 2024 00:10:35.232 read: IOPS=5589, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1003msec) 00:10:35.232 slat (nsec): min=1119, max=10786k, avg=88491.01, stdev=653299.97 00:10:35.232 clat (usec): min=1078, max=23511, avg=11746.21, stdev=3094.89 00:10:35.232 lat (usec): min=3438, max=23536, avg=11834.70, stdev=3135.84 00:10:35.232 clat percentiles (usec): 
00:10:35.232 | 1.00th=[ 4047], 5.00th=[ 7308], 10.00th=[ 8586], 20.00th=[10028], 00:10:35.232 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:10:35.232 | 70.00th=[11863], 80.00th=[13698], 90.00th=[15926], 95.00th=[18482], 00:10:35.232 | 99.00th=[20841], 99.50th=[21103], 99.90th=[22152], 99.95th=[22152], 00:10:35.232 | 99.99th=[23462] 00:10:35.232 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:35.232 slat (nsec): min=1962, max=9296.8k, avg=76219.33, stdev=492434.00 00:10:35.232 clat (usec): min=355, max=27162, avg=10873.82, stdev=3366.95 00:10:35.232 lat (usec): min=822, max=27170, avg=10950.04, stdev=3403.14 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 3195], 5.00th=[ 5145], 10.00th=[ 6980], 20.00th=[ 8979], 00:10:35.232 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:10:35.232 | 70.00th=[11600], 80.00th=[11863], 90.00th=[13173], 95.00th=[15795], 00:10:35.232 | 99.00th=[26608], 99.50th=[26608], 99.90th=[27132], 99.95th=[27132], 00:10:35.232 | 99.99th=[27132] 00:10:35.232 bw ( KiB/s): min=20752, max=24255, per=32.78%, avg=22503.50, stdev=2477.00, samples=2 00:10:35.232 iops : min= 5188, max= 6063, avg=5625.50, stdev=618.72, samples=2 00:10:35.232 lat (usec) : 500=0.01%, 1000=0.10% 00:10:35.232 lat (msec) : 2=0.22%, 4=1.25%, 10=21.74%, 20=74.32%, 50=2.36% 00:10:35.232 cpu : usr=3.79%, sys=5.49%, ctx=572, majf=0, minf=1 00:10:35.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:35.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.232 issued rwts: total=5606,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.232 job3: (groupid=0, jobs=1): err= 0: pid=1199637: Thu Dec 5 21:03:42 2024 00:10:35.232 read: IOPS=5360, BW=20.9MiB/s 
(22.0MB/s)(21.0MiB/1003msec) 00:10:35.232 slat (nsec): min=1082, max=11000k, avg=88071.85, stdev=578445.79 00:10:35.232 clat (usec): min=1918, max=26719, avg=11649.38, stdev=2748.03 00:10:35.232 lat (usec): min=4153, max=26725, avg=11737.45, stdev=2774.66 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 5014], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[10683], 00:10:35.232 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:10:35.232 | 70.00th=[11863], 80.00th=[12649], 90.00th=[15270], 95.00th=[16909], 00:10:35.232 | 99.00th=[22676], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:10:35.232 | 99.99th=[26608] 00:10:35.232 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:35.232 slat (nsec): min=1757, max=7874.7k, avg=86965.02, stdev=562327.01 00:10:35.232 clat (usec): min=654, max=30137, avg=11470.88, stdev=3367.75 00:10:35.232 lat (usec): min=663, max=30141, avg=11557.84, stdev=3396.67 00:10:35.232 clat percentiles (usec): 00:10:35.232 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[ 9634], 00:10:35.233 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:10:35.233 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13960], 95.00th=[18744], 00:10:35.233 | 99.00th=[26084], 99.50th=[28443], 99.90th=[30016], 99.95th=[30016], 00:10:35.233 | 99.99th=[30016] 00:10:35.233 bw ( KiB/s): min=22099, max=22912, per=32.78%, avg=22505.50, stdev=574.88, samples=2 00:10:35.233 iops : min= 5524, max= 5728, avg=5626.00, stdev=144.25, samples=2 00:10:35.233 lat (usec) : 750=0.03% 00:10:35.233 lat (msec) : 2=0.03%, 4=0.06%, 10=19.10%, 20=77.91%, 50=2.87% 00:10:35.233 cpu : usr=3.69%, sys=5.19%, ctx=420, majf=0, minf=2 00:10:35.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:35.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.233 issued 
rwts: total=5377,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.233 00:10:35.233 Run status group 0 (all jobs): 00:10:35.233 READ: bw=65.2MiB/s (68.3MB/s), 9836KiB/s-21.8MiB/s (10.1MB/s-22.9MB/s), io=68.0MiB (71.3MB), run=1002-1044msec 00:10:35.233 WRITE: bw=67.0MiB/s (70.3MB/s), 9.98MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1002-1044msec 00:10:35.233 00:10:35.233 Disk stats (read/write): 00:10:35.233 nvme0n1: ios=3634/3615, merge=0/0, ticks=21558/23642, in_queue=45200, util=87.47% 00:10:35.233 nvme0n2: ios=2077/2196, merge=0/0, ticks=13456/13572, in_queue=27028, util=96.44% 00:10:35.233 nvme0n3: ios=4649/5111, merge=0/0, ticks=48141/46557, in_queue=94698, util=95.21% 00:10:35.233 nvme0n4: ios=4608/4751, merge=0/0, ticks=29107/28027, in_queue=57134, util=89.30% 00:10:35.233 21:03:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:35.233 [global] 00:10:35.233 thread=1 00:10:35.233 invalidate=1 00:10:35.233 rw=randwrite 00:10:35.233 time_based=1 00:10:35.233 runtime=1 00:10:35.233 ioengine=libaio 00:10:35.233 direct=1 00:10:35.233 bs=4096 00:10:35.233 iodepth=128 00:10:35.233 norandommap=0 00:10:35.233 numjobs=1 00:10:35.233 00:10:35.233 verify_dump=1 00:10:35.233 verify_backlog=512 00:10:35.233 verify_state_save=0 00:10:35.233 do_verify=1 00:10:35.233 verify=crc32c-intel 00:10:35.233 [job0] 00:10:35.233 filename=/dev/nvme0n1 00:10:35.233 [job1] 00:10:35.233 filename=/dev/nvme0n2 00:10:35.233 [job2] 00:10:35.233 filename=/dev/nvme0n3 00:10:35.233 [job3] 00:10:35.233 filename=/dev/nvme0n4 00:10:35.233 Could not set queue depth (nvme0n1) 00:10:35.233 Could not set queue depth (nvme0n2) 00:10:35.233 Could not set queue depth (nvme0n3) 00:10:35.233 Could not set queue depth (nvme0n4) 00:10:35.233 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.233 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.233 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.233 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.233 fio-3.35 00:10:35.233 Starting 4 threads 00:10:36.607 00:10:36.607 job0: (groupid=0, jobs=1): err= 0: pid=1200008: Thu Dec 5 21:03:44 2024 00:10:36.607 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:10:36.607 slat (nsec): min=1173, max=19863k, avg=157687.92, stdev=1071303.46 00:10:36.607 clat (usec): min=4364, max=69002, avg=18326.22, stdev=13693.42 00:10:36.607 lat (usec): min=4372, max=69012, avg=18483.90, stdev=13791.52 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 5669], 5.00th=[10028], 10.00th=[10814], 20.00th=[11076], 00:10:36.607 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12780], 00:10:36.607 | 70.00th=[18744], 80.00th=[21627], 90.00th=[40109], 95.00th=[53216], 00:10:36.607 | 99.00th=[66847], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:10:36.607 | 99.99th=[68682] 00:10:36.607 write: IOPS=3453, BW=13.5MiB/s (14.1MB/s)(13.7MiB/1014msec); 0 zone resets 00:10:36.607 slat (usec): min=2, max=23134, avg=124.61, stdev=612.19 00:10:36.607 clat (usec): min=2836, max=68967, avg=20595.74, stdev=9459.18 00:10:36.607 lat (usec): min=2846, max=68971, avg=20720.35, stdev=9496.55 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 3982], 5.00th=[ 7635], 10.00th=[11469], 20.00th=[13566], 00:10:36.607 | 30.00th=[17957], 40.00th=[19792], 50.00th=[20579], 60.00th=[20841], 00:10:36.607 | 70.00th=[21103], 80.00th=[21365], 90.00th=[29230], 95.00th=[38536], 00:10:36.607 | 99.00th=[58983], 99.50th=[59507], 99.90th=[67634], 99.95th=[68682], 00:10:36.607 | 99.99th=[68682] 
00:10:36.607 bw ( KiB/s): min=12040, max=14960, per=18.14%, avg=13500.00, stdev=2064.75, samples=2 00:10:36.607 iops : min= 3010, max= 3740, avg=3375.00, stdev=516.19, samples=2 00:10:36.607 lat (msec) : 4=0.61%, 10=6.22%, 20=51.11%, 50=37.85%, 100=4.21% 00:10:36.607 cpu : usr=3.06%, sys=3.36%, ctx=407, majf=0, minf=1 00:10:36.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:36.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.607 issued rwts: total=3072,3502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.607 job1: (groupid=0, jobs=1): err= 0: pid=1200009: Thu Dec 5 21:03:44 2024 00:10:36.607 read: IOPS=5978, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1004msec) 00:10:36.607 slat (nsec): min=1167, max=11881k, avg=84502.83, stdev=539897.84 00:10:36.607 clat (usec): min=1732, max=29899, avg=10424.01, stdev=2679.23 00:10:36.607 lat (usec): min=3539, max=29914, avg=10508.51, stdev=2716.53 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 4359], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[ 9372], 00:10:36.607 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:10:36.607 | 70.00th=[10421], 80.00th=[11600], 90.00th=[12649], 95.00th=[14091], 00:10:36.607 | 99.00th=[23987], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:10:36.607 | 99.99th=[30016] 00:10:36.607 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:36.607 slat (nsec): min=1784, max=14276k, avg=75704.29, stdev=399259.11 00:10:36.607 clat (usec): min=1170, max=32221, avg=10553.50, stdev=2857.64 00:10:36.607 lat (usec): min=1180, max=32252, avg=10629.21, stdev=2891.33 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 4752], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[ 9372], 00:10:36.607 | 30.00th=[ 9634], 40.00th=[ 9896], 
50.00th=[10028], 60.00th=[10028], 00:10:36.607 | 70.00th=[10159], 80.00th=[10552], 90.00th=[13960], 95.00th=[17957], 00:10:36.607 | 99.00th=[19792], 99.50th=[20055], 99.90th=[22676], 99.95th=[22676], 00:10:36.607 | 99.99th=[32113] 00:10:36.607 bw ( KiB/s): min=24496, max=24656, per=33.03%, avg=24576.00, stdev=113.14, samples=2 00:10:36.607 iops : min= 6124, max= 6164, avg=6144.00, stdev=28.28, samples=2 00:10:36.607 lat (msec) : 2=0.05%, 4=0.47%, 10=53.38%, 20=45.32%, 50=0.79% 00:10:36.607 cpu : usr=3.39%, sys=6.08%, ctx=751, majf=0, minf=2 00:10:36.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:36.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.607 issued rwts: total=6002,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.607 job2: (groupid=0, jobs=1): err= 0: pid=1200011: Thu Dec 5 21:03:44 2024 00:10:36.607 read: IOPS=3556, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1014msec) 00:10:36.607 slat (nsec): min=1255, max=11757k, avg=117964.94, stdev=797953.67 00:10:36.607 clat (usec): min=4474, max=30979, avg=14161.66, stdev=4463.57 00:10:36.607 lat (usec): min=4485, max=30989, avg=14279.62, stdev=4509.86 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 5276], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11338], 00:10:36.607 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12256], 60.00th=[13960], 00:10:36.607 | 70.00th=[14877], 80.00th=[17433], 90.00th=[20317], 95.00th=[23200], 00:10:36.607 | 99.00th=[29230], 99.50th=[29754], 99.90th=[31065], 99.95th=[31065], 00:10:36.607 | 99.99th=[31065] 00:10:36.607 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:10:36.607 slat (usec): min=2, max=10907, avg=134.23, stdev=665.03 00:10:36.607 clat (usec): min=3068, max=55483, avg=18871.28, stdev=10878.90 00:10:36.607 lat (usec): 
min=3078, max=55490, avg=19005.51, stdev=10937.14 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 4047], 5.00th=[ 6915], 10.00th=[10421], 20.00th=[11469], 00:10:36.607 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[20841], 00:10:36.607 | 70.00th=[21103], 80.00th=[21627], 90.00th=[36963], 95.00th=[45351], 00:10:36.607 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:10:36.607 | 99.99th=[55313] 00:10:36.607 bw ( KiB/s): min=12424, max=19504, per=21.46%, avg=15964.00, stdev=5006.32, samples=2 00:10:36.607 iops : min= 3106, max= 4876, avg=3991.00, stdev=1251.58, samples=2 00:10:36.607 lat (msec) : 4=0.49%, 10=9.10%, 20=61.10%, 50=27.98%, 100=1.32% 00:10:36.607 cpu : usr=2.86%, sys=5.13%, ctx=500, majf=0, minf=1 00:10:36.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:36.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.607 issued rwts: total=3606,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.607 job3: (groupid=0, jobs=1): err= 0: pid=1200012: Thu Dec 5 21:03:44 2024 00:10:36.607 read: IOPS=4845, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:10:36.607 slat (nsec): min=1293, max=14974k, avg=114413.78, stdev=849903.50 00:10:36.607 clat (usec): min=1261, max=38429, avg=14058.77, stdev=5031.77 00:10:36.607 lat (usec): min=3992, max=38456, avg=14173.18, stdev=5087.01 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 6063], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[10945], 00:10:36.607 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11600], 60.00th=[12649], 00:10:36.607 | 70.00th=[15795], 80.00th=[18482], 90.00th=[20317], 95.00th=[22414], 00:10:36.607 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:10:36.607 | 99.99th=[38536] 00:10:36.607 write: IOPS=5104, 
BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:36.607 slat (usec): min=2, max=10467, avg=81.56, stdev=410.36 00:10:36.607 clat (usec): min=2527, max=30363, avg=11508.06, stdev=3449.02 00:10:36.607 lat (usec): min=2537, max=30375, avg=11589.62, stdev=3487.25 00:10:36.607 clat percentiles (usec): 00:10:36.607 | 1.00th=[ 3687], 5.00th=[ 6652], 10.00th=[ 8455], 20.00th=[ 9765], 00:10:36.607 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:10:36.607 | 70.00th=[11600], 80.00th=[11863], 90.00th=[15401], 95.00th=[20579], 00:10:36.607 | 99.00th=[20841], 99.50th=[21890], 99.90th=[23200], 99.95th=[25822], 00:10:36.607 | 99.99th=[30278] 00:10:36.607 bw ( KiB/s): min=18288, max=22672, per=27.52%, avg=20480.00, stdev=3099.96, samples=2 00:10:36.607 iops : min= 4572, max= 5668, avg=5120.00, stdev=774.99, samples=2 00:10:36.607 lat (msec) : 2=0.01%, 4=0.72%, 10=13.62%, 20=75.66%, 50=9.99% 00:10:36.607 cpu : usr=3.39%, sys=5.59%, ctx=602, majf=0, minf=1 00:10:36.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:36.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.607 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.607 00:10:36.607 Run status group 0 (all jobs): 00:10:36.607 READ: bw=67.6MiB/s (70.9MB/s), 11.8MiB/s-23.4MiB/s (12.4MB/s-24.5MB/s), io=68.5MiB (71.8MB), run=1003-1014msec 00:10:36.607 WRITE: bw=72.7MiB/s (76.2MB/s), 13.5MiB/s-23.9MiB/s (14.1MB/s-25.1MB/s), io=73.7MiB (77.3MB), run=1003-1014msec 00:10:36.607 00:10:36.607 Disk stats (read/write): 00:10:36.607 nvme0n1: ios=2598/2983, merge=0/0, ticks=46765/58879, in_queue=105644, util=96.49% 00:10:36.607 nvme0n2: ios=5161/5127, merge=0/0, ticks=34141/33505, in_queue=67646, util=92.17% 00:10:36.607 nvme0n3: ios=3089/3583, merge=0/0, 
ticks=42763/62287, in_queue=105050, util=93.85% 00:10:36.607 nvme0n4: ios=4153/4215, merge=0/0, ticks=56666/47983, in_queue=104649, util=95.38% 00:10:36.607 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:36.607 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1200243 00:10:36.607 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:36.607 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:36.607 [global] 00:10:36.607 thread=1 00:10:36.607 invalidate=1 00:10:36.607 rw=read 00:10:36.608 time_based=1 00:10:36.608 runtime=10 00:10:36.608 ioengine=libaio 00:10:36.608 direct=1 00:10:36.608 bs=4096 00:10:36.608 iodepth=1 00:10:36.608 norandommap=1 00:10:36.608 numjobs=1 00:10:36.608 00:10:36.608 [job0] 00:10:36.608 filename=/dev/nvme0n1 00:10:36.608 [job1] 00:10:36.608 filename=/dev/nvme0n2 00:10:36.608 [job2] 00:10:36.608 filename=/dev/nvme0n3 00:10:36.608 [job3] 00:10:36.608 filename=/dev/nvme0n4 00:10:36.608 Could not set queue depth (nvme0n1) 00:10:36.608 Could not set queue depth (nvme0n2) 00:10:36.608 Could not set queue depth (nvme0n3) 00:10:36.608 Could not set queue depth (nvme0n4) 00:10:36.865 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.865 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.865 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.865 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.865 fio-3.35 00:10:36.865 Starting 4 threads 00:10:40.146 21:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:40.146 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45764608, buflen=4096 00:10:40.146 fio: pid=1200386, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.146 21:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:40.146 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47468544, buflen=4096 00:10:40.146 fio: pid=1200385, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.146 21:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.146 21:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:40.146 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.146 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:40.146 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49414144, buflen=4096 00:10:40.146 fio: pid=1200383, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.403 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3829760, buflen=4096 00:10:40.403 fio: pid=1200384, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.403 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.403 21:03:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:40.403 00:10:40.403 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1200383: Thu Dec 5 21:03:48 2024 00:10:40.403 read: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(47.1MiB/3155msec) 00:10:40.403 slat (usec): min=6, max=28937, avg=10.64, stdev=283.50 00:10:40.403 clat (usec): min=167, max=3356, avg=247.23, stdev=37.99 00:10:40.403 lat (usec): min=174, max=29397, avg=257.86, stdev=288.65 00:10:40.403 clat percentiles (usec): 00:10:40.403 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 233], 00:10:40.403 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:40.403 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:10:40.403 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 490], 99.95th=[ 502], 00:10:40.403 | 99.99th=[ 644] 00:10:40.403 bw ( KiB/s): min=15016, max=16126, per=36.31%, avg=15429.00, stdev=389.29, samples=6 00:10:40.403 iops : min= 3754, max= 4031, avg=3857.17, stdev=97.14, samples=6 00:10:40.403 lat (usec) : 250=51.55%, 500=48.38%, 750=0.05% 00:10:40.403 lat (msec) : 4=0.01% 00:10:40.403 cpu : usr=0.79%, sys=3.58%, ctx=12068, majf=0, minf=1 00:10:40.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.403 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.403 issued rwts: total=12065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.403 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1200384: Thu Dec 5 21:03:48 2024 00:10:40.403 read: IOPS=278, BW=1111KiB/s (1138kB/s)(3740KiB/3366msec) 00:10:40.403 slat (usec): min=6, max=15727, 
avg=37.86, stdev=643.16 00:10:40.403 clat (usec): min=244, max=42930, avg=3537.56, stdev=11071.10 00:10:40.403 lat (usec): min=253, max=56876, avg=3575.44, stdev=11192.88 00:10:40.403 clat percentiles (usec): 00:10:40.403 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 260], 00:10:40.404 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:10:40.404 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[41157], 00:10:40.404 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:10:40.404 | 99.99th=[42730] 00:10:40.404 bw ( KiB/s): min= 96, max= 5737, per=2.45%, avg=1042.83, stdev=2299.67, samples=6 00:10:40.404 iops : min= 24, max= 1434, avg=260.67, stdev=574.81, samples=6 00:10:40.404 lat (usec) : 250=1.39%, 500=90.49% 00:10:40.404 lat (msec) : 50=8.01% 00:10:40.404 cpu : usr=0.06%, sys=0.33%, ctx=938, majf=0, minf=2 00:10:40.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.404 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.404 issued rwts: total=936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.404 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1200385: Thu Dec 5 21:03:48 2024 00:10:40.404 read: IOPS=3943, BW=15.4MiB/s (16.2MB/s)(45.3MiB/2939msec) 00:10:40.404 slat (usec): min=2, max=15231, avg= 9.59, stdev=177.42 00:10:40.404 clat (usec): min=166, max=633, avg=241.16, stdev=27.69 00:10:40.404 lat (usec): min=174, max=15681, avg=250.75, stdev=182.55 00:10:40.404 clat percentiles (usec): 00:10:40.404 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:10:40.404 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 249], 00:10:40.404 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:10:40.404 | 
99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 371], 99.95th=[ 404], 00:10:40.404 | 99.99th=[ 570] 00:10:40.404 bw ( KiB/s): min=14592, max=17376, per=37.28%, avg=15843.20, stdev=1396.94, samples=5 00:10:40.404 iops : min= 3648, max= 4344, avg=3960.80, stdev=349.24, samples=5 00:10:40.404 lat (usec) : 250=61.65%, 500=38.31%, 750=0.03% 00:10:40.404 cpu : usr=0.78%, sys=3.71%, ctx=11592, majf=0, minf=2 00:10:40.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.404 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.404 issued rwts: total=11590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.404 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1200386: Thu Dec 5 21:03:48 2024 00:10:40.404 read: IOPS=4093, BW=16.0MiB/s (16.8MB/s)(43.6MiB/2730msec) 00:10:40.404 slat (nsec): min=5549, max=56949, avg=7439.65, stdev=982.24 00:10:40.404 clat (usec): min=177, max=1198, avg=233.53, stdev=25.53 00:10:40.404 lat (usec): min=185, max=1209, avg=240.97, stdev=25.68 00:10:40.404 clat percentiles (usec): 00:10:40.404 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:10:40.404 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:40.404 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:10:40.404 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 375], 99.95th=[ 490], 00:10:40.404 | 99.99th=[ 676] 00:10:40.404 bw ( KiB/s): min=14912, max=17632, per=38.65%, avg=16425.60, stdev=1209.66, samples=5 00:10:40.404 iops : min= 3728, max= 4408, avg=4106.40, stdev=302.41, samples=5 00:10:40.404 lat (usec) : 250=76.00%, 500=23.96%, 750=0.03% 00:10:40.404 lat (msec) : 2=0.01% 00:10:40.404 cpu : usr=0.92%, sys=3.85%, ctx=11176, majf=0, minf=2 00:10:40.404 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.404 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.404 issued rwts: total=11174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.404 00:10:40.404 Run status group 0 (all jobs): 00:10:40.404 READ: bw=41.5MiB/s (43.5MB/s), 1111KiB/s-16.0MiB/s (1138kB/s-16.8MB/s), io=140MiB (146MB), run=2730-3366msec 00:10:40.404 00:10:40.404 Disk stats (read/write): 00:10:40.404 nvme0n1: ios=12014/0, merge=0/0, ticks=3473/0, in_queue=3473, util=98.86% 00:10:40.404 nvme0n2: ios=936/0, merge=0/0, ticks=3312/0, in_queue=3312, util=95.63% 00:10:40.404 nvme0n3: ios=11336/0, merge=0/0, ticks=2698/0, in_queue=2698, util=95.67% 00:10:40.404 nvme0n4: ios=10747/0, merge=0/0, ticks=2595/0, in_queue=2595, util=98.74% 00:10:40.660 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.660 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:40.917 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.917 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:41.179 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.179 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:41.179 21:03:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.179 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:41.437 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:41.437 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1200243 00:10:41.437 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:41.437 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:41.693 nvmf hotplug test: fio 
failed as expected 00:10:41.693 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.950 rmmod nvme_tcp 00:10:41.950 rmmod nvme_fabrics 00:10:41.950 rmmod nvme_keyring 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1197322 ']' 00:10:41.950 21:03:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1197322 00:10:41.950 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1197322 ']' 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1197322 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197322 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197322' 00:10:41.951 killing process with pid 1197322 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1197322 00:10:41.951 21:03:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1197322 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.209 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.112 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.112 00:10:44.112 real 0m26.950s 00:10:44.112 user 1m46.802s 00:10:44.112 sys 0m8.992s 00:10:44.112 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.112 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.112 ************************************ 00:10:44.112 END TEST nvmf_fio_target 00:10:44.112 ************************************ 00:10:44.371 21:03:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.371 21:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.371 21:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.371 21:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.372 ************************************ 00:10:44.372 START TEST nvmf_bdevio 00:10:44.372 ************************************ 00:10:44.372 21:03:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.372 * Looking for test storage... 00:10:44.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:10:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.372 --rc genhtml_branch_coverage=1 00:10:44.372 --rc genhtml_function_coverage=1 00:10:44.372 --rc genhtml_legend=1 00:10:44.372 --rc geninfo_all_blocks=1 00:10:44.372 --rc geninfo_unexecuted_blocks=1 00:10:44.372 00:10:44.372 ' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.372 --rc genhtml_branch_coverage=1 00:10:44.372 --rc genhtml_function_coverage=1 00:10:44.372 --rc genhtml_legend=1 00:10:44.372 --rc geninfo_all_blocks=1 00:10:44.372 --rc geninfo_unexecuted_blocks=1 00:10:44.372 00:10:44.372 ' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.372 --rc genhtml_branch_coverage=1 00:10:44.372 --rc genhtml_function_coverage=1 00:10:44.372 --rc genhtml_legend=1 00:10:44.372 --rc geninfo_all_blocks=1 00:10:44.372 --rc geninfo_unexecuted_blocks=1 00:10:44.372 00:10:44.372 ' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.372 --rc genhtml_branch_coverage=1 00:10:44.372 --rc genhtml_function_coverage=1 00:10:44.372 --rc genhtml_legend=1 00:10:44.372 --rc geninfo_all_blocks=1 00:10:44.372 --rc geninfo_unexecuted_blocks=1 00:10:44.372 00:10:44.372 ' 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.372 21:03:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.372 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.632 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.633 21:03:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.207 21:03:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.207 21:03:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:51.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:51.207 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.207 
21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:51.207 Found net devices under 0000:86:00.0: cvl_0_0 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:51.207 Found net devices under 0000:86:00.1: cvl_0_1 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.207 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:10:51.208 00:10:51.208 --- 10.0.0.2 ping statistics --- 00:10:51.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.208 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:51.208 00:10:51.208 --- 10.0.0.1 ping statistics --- 00:10:51.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.208 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.208 21:03:58 
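The `nvmf_tcp_init` trace above (address flush, namespace creation, IP assignment, firewall rule, bidirectional ping) condenses to the sketch below. The interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses are the values from this particular run and will differ on other machines; actually applying the commands needs root, so the sketch supports a `DRY_RUN=1` mode that only prints the planned commands.

```shell
#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps traced above. Interface names
# and addresses are the values seen in this log, not fixed constants. Real
# execution needs root; DRY_RUN=1 prints the commands instead of running them.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}       # port moved into the target namespace
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1} # port left in the default namespace
NETNS=${NETNS:-cvl_0_0_ns_spdk}

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

setup_test_net() {
  # Drop stale IPv4 addresses on both ports.
  run ip -4 addr flush "$TARGET_IF"
  run ip -4 addr flush "$INITIATOR_IF"

  # Isolate the target port in its own network namespace.
  run ip netns add "$NETNS"
  run ip link set "$TARGET_IF" netns "$NETNS"

  # Initiator keeps 10.0.0.1; the target gets 10.0.0.2 inside the namespace.
  run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  # Bring both ports (and the namespace loopback) up.
  run ip link set "$INITIATOR_IF" up
  run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
  run ip netns exec "$NETNS" ip link set lo up

  # Let NVMe/TCP traffic (port 4420) in on the initiator side.
  run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Sanity check: each side can reach the other.
  run ping -c 1 10.0.0.2
  run ip netns exec "$NETNS" ping -c 1 10.0.0.1
}

# Print the plan without touching the system:
plan=$(DRY_RUN=1 setup_test_net)
printf '%s\n' "$plan"
```

Putting one physical port inside a namespace lets a single host act as both target and initiator over real NIC hardware, which is why the subsequent pings cross the wire (0.2-0.3 ms RTT) rather than loopback.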
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1204845 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1204845 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1204845 ']' 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.208 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.208 [2024-12-05 21:03:58.550335] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:10:51.208 [2024-12-05 21:03:58.550394] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.208 [2024-12-05 21:03:58.627995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.208 [2024-12-05 21:03:58.668503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.208 [2024-12-05 21:03:58.668543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.208 [2024-12-05 21:03:58.668549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.208 [2024-12-05 21:03:58.668555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.208 [2024-12-05 21:03:58.668560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.208 [2024-12-05 21:03:58.670160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.208 [2024-12-05 21:03:58.670266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:51.208 [2024-12-05 21:03:58.670388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.208 [2024-12-05 21:03:58.670390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.466 [2024-12-05 21:03:59.417281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.466 21:03:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.466 Malloc0 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.466 [2024-12-05 21:03:59.490141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:51.466 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:51.466 { 00:10:51.466 "params": { 00:10:51.466 "name": "Nvme$subsystem", 00:10:51.466 "trtype": "$TEST_TRANSPORT", 00:10:51.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.466 "adrfam": "ipv4", 00:10:51.466 "trsvcid": "$NVMF_PORT", 00:10:51.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.466 "hdgst": ${hdgst:-false}, 00:10:51.466 "ddgst": ${ddgst:-false} 00:10:51.466 }, 00:10:51.466 "method": "bdev_nvme_attach_controller" 00:10:51.467 } 00:10:51.467 EOF 00:10:51.467 )") 00:10:51.467 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:51.467 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:51.467 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:51.467 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:51.467 "params": { 00:10:51.467 "name": "Nvme1", 00:10:51.467 "trtype": "tcp", 00:10:51.467 "traddr": "10.0.0.2", 00:10:51.467 "adrfam": "ipv4", 00:10:51.467 "trsvcid": "4420", 00:10:51.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.467 "hdgst": false, 00:10:51.467 "ddgst": false 00:10:51.467 }, 00:10:51.467 "method": "bdev_nvme_attach_controller" 00:10:51.467 }' 00:10:51.467 [2024-12-05 21:03:59.523758] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:10:51.467 [2024-12-05 21:03:59.523803] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204888 ] 00:10:51.724 [2024-12-05 21:03:59.600060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.724 [2024-12-05 21:03:59.644287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.724 [2024-12-05 21:03:59.644407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.724 [2024-12-05 21:03:59.644408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.982 I/O targets: 00:10:51.982 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:51.982 00:10:51.982 00:10:51.982 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.982 http://cunit.sourceforge.net/ 00:10:51.982 00:10:51.982 00:10:51.982 Suite: bdevio tests on: Nvme1n1 00:10:51.982 Test: blockdev write read block ...passed 00:10:51.982 Test: blockdev write zeroes read block ...passed 00:10:51.982 Test: blockdev write zeroes read no split ...passed 00:10:52.239 Test: blockdev write zeroes read split 
...passed 00:10:52.239 Test: blockdev write zeroes read split partial ...passed 00:10:52.239 Test: blockdev reset ...[2024-12-05 21:04:00.125718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:52.239 [2024-12-05 21:04:00.125788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819f30 (9): Bad file descriptor 00:10:52.239 [2024-12-05 21:04:00.138250] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:52.239 passed 00:10:52.239 Test: blockdev write read 8 blocks ...passed 00:10:52.239 Test: blockdev write read size > 128k ...passed 00:10:52.239 Test: blockdev write read invalid size ...passed 00:10:52.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.239 Test: blockdev write read max offset ...passed 00:10:52.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.239 Test: blockdev writev readv 8 blocks ...passed 00:10:52.239 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.239 Test: blockdev writev readv block ...passed 00:10:52.239 Test: blockdev writev readv size > 128k ...passed 00:10:52.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.239 Test: blockdev comparev and writev ...[2024-12-05 21:04:00.310509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.310539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.310553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 
21:04:00.310561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.310789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.310799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.310810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.310818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.311031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.311041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.311052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.311059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.311289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.311299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.239 [2024-12-05 21:04:00.311310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.239 [2024-12-05 21:04:00.311317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.496 passed 00:10:52.496 Test: blockdev nvme passthru rw ...passed 00:10:52.496 Test: blockdev nvme passthru vendor specific ...[2024-12-05 21:04:00.393731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.496 [2024-12-05 21:04:00.393746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.496 [2024-12-05 21:04:00.393855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.496 [2024-12-05 21:04:00.393865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.496 [2024-12-05 21:04:00.393982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.497 [2024-12-05 21:04:00.393991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.497 [2024-12-05 21:04:00.394110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.497 [2024-12-05 21:04:00.394123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.497 passed 00:10:52.497 Test: blockdev nvme admin passthru ...passed 00:10:52.497 Test: blockdev copy ...passed 00:10:52.497 00:10:52.497 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.497 suites 1 1 n/a 0 0 00:10:52.497 tests 23 23 23 0 0 00:10:52.497 asserts 152 152 152 0 n/a 00:10:52.497 00:10:52.497 Elapsed time = 0.962 seconds 
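The bdevio run above receives its NVMe-oF connection parameters as JSON generated by `gen_nvmf_target_json` and passed on `--json /dev/fd/62`. A minimal reconstruction is sketched below: the inner `params`/`method` object is exactly what the trace printed, while the surrounding `subsystems`/`bdev` wrapper is an assumption about the full file layout inferred from how SPDK JSON configs are normally shaped.

```shell
# Reconstruction of the JSON config fed to bdevio in the trace above. The
# "params" object matches the printf output in the log; the outer
# subsystems/bdev wrapper is assumed, not shown in the trace.
cfg=$(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
# bdevio would consume this via process substitution, roughly:
#   bdevio --json <(printf '%s' "$cfg")
printf '%s' "$cfg" | python3 -c 'import json,sys; json.load(sys.stdin)' \
  && echo 'config is valid JSON'
```

This is the same `bdev_nvme_attach_controller` call that produces the `Nvme1n1` namespace bdev (131072 blocks of 512 bytes) the CUnit suite then exercises.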
00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.497 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.755 rmmod nvme_tcp 00:10:52.755 rmmod nvme_fabrics 00:10:52.755 rmmod nvme_keyring 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1204845 ']' 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1204845 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1204845 ']' 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1204845 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1204845 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1204845' 00:10:52.755 killing process with pid 1204845 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1204845 00:10:52.755 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1204845 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.014 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.918 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.918 00:10:54.918 real 0m10.679s 00:10:54.918 user 0m13.003s 00:10:54.918 sys 0m5.036s 00:10:54.918 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.918 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.918 ************************************ 00:10:54.918 END TEST nvmf_bdevio 00:10:54.918 ************************************ 00:10:54.918 21:04:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:54.918 00:10:54.918 real 4m38.462s 00:10:54.918 user 10m34.956s 00:10:54.918 sys 1m40.398s 00:10:54.918 21:04:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.918 21:04:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.918 ************************************ 00:10:54.918 END TEST nvmf_target_core 00:10:54.918 ************************************ 00:10:55.177 21:04:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.177 21:04:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.177 21:04:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.177 21:04:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:55.177 ************************************ 00:10:55.177 START TEST nvmf_target_extra 00:10:55.177 ************************************ 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.177 * Looking for test storage... 00:10:55.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.177 --rc genhtml_branch_coverage=1 00:10:55.177 --rc genhtml_function_coverage=1 00:10:55.177 --rc genhtml_legend=1 00:10:55.177 --rc geninfo_all_blocks=1 
00:10:55.177 --rc geninfo_unexecuted_blocks=1 00:10:55.177 00:10:55.177 ' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.177 --rc genhtml_branch_coverage=1 00:10:55.177 --rc genhtml_function_coverage=1 00:10:55.177 --rc genhtml_legend=1 00:10:55.177 --rc geninfo_all_blocks=1 00:10:55.177 --rc geninfo_unexecuted_blocks=1 00:10:55.177 00:10:55.177 ' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.177 --rc genhtml_branch_coverage=1 00:10:55.177 --rc genhtml_function_coverage=1 00:10:55.177 --rc genhtml_legend=1 00:10:55.177 --rc geninfo_all_blocks=1 00:10:55.177 --rc geninfo_unexecuted_blocks=1 00:10:55.177 00:10:55.177 ' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.177 --rc genhtml_branch_coverage=1 00:10:55.177 --rc genhtml_function_coverage=1 00:10:55.177 --rc genhtml_legend=1 00:10:55.177 --rc geninfo_all_blocks=1 00:10:55.177 --rc geninfo_unexecuted_blocks=1 00:10:55.177 00:10:55.177 ' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.177 21:04:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.178 21:04:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.178 21:04:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.178 21:04:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:55.178 21:04:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.178 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.437 ************************************ 00:10:55.437 START TEST nvmf_example 00:10:55.437 ************************************ 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:55.437 * Looking for test storage... 00:10:55.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.437 
21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.437 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.438 --rc genhtml_branch_coverage=1 00:10:55.438 --rc genhtml_function_coverage=1 00:10:55.438 --rc genhtml_legend=1 00:10:55.438 --rc geninfo_all_blocks=1 00:10:55.438 --rc geninfo_unexecuted_blocks=1 00:10:55.438 00:10:55.438 ' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.438 --rc genhtml_branch_coverage=1 00:10:55.438 --rc genhtml_function_coverage=1 00:10:55.438 --rc genhtml_legend=1 00:10:55.438 --rc geninfo_all_blocks=1 00:10:55.438 --rc geninfo_unexecuted_blocks=1 00:10:55.438 00:10:55.438 ' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.438 --rc genhtml_branch_coverage=1 00:10:55.438 --rc genhtml_function_coverage=1 00:10:55.438 --rc genhtml_legend=1 00:10:55.438 --rc geninfo_all_blocks=1 00:10:55.438 --rc geninfo_unexecuted_blocks=1 00:10:55.438 00:10:55.438 ' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.438 --rc 
genhtml_branch_coverage=1 00:10:55.438 --rc genhtml_function_coverage=1 00:10:55.438 --rc genhtml_legend=1 00:10:55.438 --rc geninfo_all_blocks=1 00:10:55.438 --rc geninfo_unexecuted_blocks=1 00:10:55.438 00:10:55.438 ' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:55.438 21:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.438 
21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.438 21:04:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.137 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.138 21:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:02.138 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:02.138 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:02.138 Found net devices under 0000:86:00.0: cvl_0_0 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.138 21:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:02.138 Found net devices under 0000:86:00.1: cvl_0_1 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.138 
21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:11:02.138 00:11:02.138 --- 10.0.0.2 ping statistics --- 00:11:02.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.138 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:11:02.138 00:11:02.138 --- 10.0.0.1 ping statistics --- 00:11:02.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.138 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.138 21:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1208774 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1208774 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1208774 ']' 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:02.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.138 21:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.408 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:02.665 
21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:02.665 21:04:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:14.849 Initializing NVMe Controllers 00:11:14.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:14.849 Initialization complete. Launching workers. 00:11:14.849 ======================================================== 00:11:14.849 Latency(us) 00:11:14.849 Device Information : IOPS MiB/s Average min max 00:11:14.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18592.51 72.63 3441.63 540.08 15595.74 00:11:14.849 ======================================================== 00:11:14.849 Total : 18592.51 72.63 3441.63 540.08 15595.74 00:11:14.849 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.849 rmmod nvme_tcp 00:11:14.849 rmmod nvme_fabrics 00:11:14.849 rmmod nvme_keyring 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1208774 ']' 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1208774 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1208774 ']' 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1208774 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1208774 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1208774' 00:11:14.849 killing process with pid 1208774 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1208774 00:11:14.849 21:04:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1208774 00:11:14.849 nvmf threads initialize successfully 00:11:14.849 bdev subsystem init successfully 00:11:14.849 created a nvmf target service 00:11:14.849 create targets's poll groups done 00:11:14.849 all subsystems of target started 00:11:14.849 nvmf target is running 00:11:14.849 all subsystems of target stopped 00:11:14.849 destroy targets's poll groups done 00:11:14.849 destroyed the nvmf target service 00:11:14.849 bdev subsystem 
finish successfully 00:11:14.849 nvmf threads destroy successfully 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.849 21:04:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.109 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.109 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:15.109 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.109 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.368 00:11:15.368 real 0m19.920s 00:11:15.368 user 0m46.217s 00:11:15.368 sys 0m6.157s 00:11:15.368 
21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.368 ************************************ 00:11:15.368 END TEST nvmf_example 00:11:15.368 ************************************ 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.368 ************************************ 00:11:15.368 START TEST nvmf_filesystem 00:11:15.368 ************************************ 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:15.368 * Looking for test storage... 
00:11:15.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:15.368 
21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.368 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.631 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:15.631 --rc genhtml_branch_coverage=1 00:11:15.631 --rc genhtml_function_coverage=1 00:11:15.631 --rc genhtml_legend=1 00:11:15.631 --rc geninfo_all_blocks=1 00:11:15.631 --rc geninfo_unexecuted_blocks=1 00:11:15.631 00:11:15.631 ' 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.631 --rc genhtml_branch_coverage=1 00:11:15.631 --rc genhtml_function_coverage=1 00:11:15.631 --rc genhtml_legend=1 00:11:15.631 --rc geninfo_all_blocks=1 00:11:15.631 --rc geninfo_unexecuted_blocks=1 00:11:15.631 00:11:15.631 ' 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.631 --rc genhtml_branch_coverage=1 00:11:15.631 --rc genhtml_function_coverage=1 00:11:15.631 --rc genhtml_legend=1 00:11:15.631 --rc geninfo_all_blocks=1 00:11:15.631 --rc geninfo_unexecuted_blocks=1 00:11:15.631 00:11:15.631 ' 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.631 --rc genhtml_branch_coverage=1 00:11:15.631 --rc genhtml_function_coverage=1 00:11:15.631 --rc genhtml_legend=1 00:11:15.631 --rc geninfo_all_blocks=1 00:11:15.631 --rc geninfo_unexecuted_blocks=1 00:11:15.631 00:11:15.631 ' 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:15.631 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:15.631 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:15.631 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:15.631 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:15.632 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:15.632 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:15.632 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:15.633 
21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:15.633 #define SPDK_CONFIG_H 00:11:15.633 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:15.633 #define SPDK_CONFIG_APPS 1 00:11:15.633 #define SPDK_CONFIG_ARCH native 00:11:15.633 #undef SPDK_CONFIG_ASAN 00:11:15.633 #undef SPDK_CONFIG_AVAHI 00:11:15.633 #undef SPDK_CONFIG_CET 00:11:15.633 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:15.633 #define SPDK_CONFIG_COVERAGE 1 00:11:15.633 #define SPDK_CONFIG_CROSS_PREFIX 00:11:15.633 #undef SPDK_CONFIG_CRYPTO 00:11:15.633 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:15.633 #undef SPDK_CONFIG_CUSTOMOCF 00:11:15.633 #undef SPDK_CONFIG_DAOS 00:11:15.633 #define SPDK_CONFIG_DAOS_DIR 00:11:15.633 #define SPDK_CONFIG_DEBUG 1 00:11:15.633 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:15.633 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:15.633 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:15.633 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:15.633 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:15.633 #undef SPDK_CONFIG_DPDK_UADK 00:11:15.633 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:15.633 #define SPDK_CONFIG_EXAMPLES 1 00:11:15.633 #undef SPDK_CONFIG_FC 00:11:15.633 #define SPDK_CONFIG_FC_PATH 00:11:15.633 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:15.633 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:15.633 #define SPDK_CONFIG_FSDEV 1 00:11:15.633 #undef SPDK_CONFIG_FUSE 00:11:15.633 #undef SPDK_CONFIG_FUZZER 00:11:15.633 #define SPDK_CONFIG_FUZZER_LIB 00:11:15.633 #undef SPDK_CONFIG_GOLANG 00:11:15.633 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:15.633 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:15.633 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:15.633 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:15.633 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:15.633 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:15.633 #undef SPDK_CONFIG_HAVE_LZ4 00:11:15.633 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:15.633 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:15.633 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:15.633 #define SPDK_CONFIG_IDXD 1 00:11:15.633 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:15.633 #undef SPDK_CONFIG_IPSEC_MB 00:11:15.633 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:15.633 #define SPDK_CONFIG_ISAL 1 00:11:15.633 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:15.633 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:15.633 #define SPDK_CONFIG_LIBDIR 00:11:15.633 #undef SPDK_CONFIG_LTO 00:11:15.633 #define SPDK_CONFIG_MAX_LCORES 128 00:11:15.633 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:15.633 #define SPDK_CONFIG_NVME_CUSE 1 00:11:15.633 #undef SPDK_CONFIG_OCF 00:11:15.633 #define SPDK_CONFIG_OCF_PATH 00:11:15.633 #define SPDK_CONFIG_OPENSSL_PATH 00:11:15.633 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:15.633 #define SPDK_CONFIG_PGO_DIR 00:11:15.633 #undef SPDK_CONFIG_PGO_USE 00:11:15.633 #define SPDK_CONFIG_PREFIX /usr/local 00:11:15.633 #undef SPDK_CONFIG_RAID5F 00:11:15.633 #undef SPDK_CONFIG_RBD 00:11:15.633 #define SPDK_CONFIG_RDMA 1 00:11:15.633 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:15.633 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:15.633 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:15.633 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:15.633 #define SPDK_CONFIG_SHARED 1 00:11:15.633 #undef SPDK_CONFIG_SMA 00:11:15.633 #define SPDK_CONFIG_TESTS 1 00:11:15.633 #undef SPDK_CONFIG_TSAN 00:11:15.633 #define SPDK_CONFIG_UBLK 1 00:11:15.633 #define SPDK_CONFIG_UBSAN 1 00:11:15.633 #undef SPDK_CONFIG_UNIT_TESTS 00:11:15.633 #undef SPDK_CONFIG_URING 00:11:15.633 #define SPDK_CONFIG_URING_PATH 00:11:15.633 #undef SPDK_CONFIG_URING_ZNS 00:11:15.633 #undef SPDK_CONFIG_USDT 00:11:15.633 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:15.633 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:15.633 #define SPDK_CONFIG_VFIO_USER 1 00:11:15.633 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:15.633 #define SPDK_CONFIG_VHOST 1 00:11:15.633 #define SPDK_CONFIG_VIRTIO 1 00:11:15.633 #undef SPDK_CONFIG_VTUNE 00:11:15.633 #define SPDK_CONFIG_VTUNE_DIR 00:11:15.633 #define SPDK_CONFIG_WERROR 1 00:11:15.633 #define SPDK_CONFIG_WPDK_DIR 00:11:15.633 #undef SPDK_CONFIG_XNVME 00:11:15.633 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.633 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:15.634 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:15.634 
21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:15.634 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:15.635 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:15.635 
21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:15.635 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:15.635 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:15.636 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1211137 ]] 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1211137 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3N5utk 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3N5utk/tests/target /tmp/spdk.3N5utk 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189545918464 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963969536 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6418051072 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:15.637 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971953664 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981538304 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:11:15.638 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=446464 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:15.638 * Looking for test storage... 
00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189545918464 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8632643584 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.638 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:15.638 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:15.638 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.639 --rc genhtml_branch_coverage=1 00:11:15.639 --rc genhtml_function_coverage=1 00:11:15.639 --rc genhtml_legend=1 00:11:15.639 --rc geninfo_all_blocks=1 00:11:15.639 --rc geninfo_unexecuted_blocks=1 00:11:15.639 00:11:15.639 ' 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.639 --rc genhtml_branch_coverage=1 00:11:15.639 --rc genhtml_function_coverage=1 00:11:15.639 --rc genhtml_legend=1 00:11:15.639 --rc geninfo_all_blocks=1 00:11:15.639 --rc geninfo_unexecuted_blocks=1 00:11:15.639 00:11:15.639 ' 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.639 --rc genhtml_branch_coverage=1 00:11:15.639 --rc genhtml_function_coverage=1 00:11:15.639 --rc genhtml_legend=1 00:11:15.639 --rc geninfo_all_blocks=1 00:11:15.639 --rc geninfo_unexecuted_blocks=1 00:11:15.639 00:11:15.639 ' 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.639 --rc genhtml_branch_coverage=1 00:11:15.639 --rc genhtml_function_coverage=1 00:11:15.639 --rc genhtml_legend=1 00:11:15.639 --rc geninfo_all_blocks=1 00:11:15.639 --rc geninfo_unexecuted_blocks=1 00:11:15.639 00:11:15.639 ' 00:11:15.639 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.639 21:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.899 21:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.466 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.467 21:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:22.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:22.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.467 21:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:22.467 Found net devices under 0000:86:00.0: cvl_0_0 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:22.467 Found net devices under 0000:86:00.1: cvl_0_1 00:11:22.467 21:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:22.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:11:22.467 00:11:22.467 --- 10.0.0.2 ping statistics --- 00:11:22.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.467 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:11:22.467 00:11:22.467 --- 10.0.0.1 ping statistics --- 00:11:22.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.467 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:22.467 21:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.467 ************************************ 00:11:22.467 START TEST nvmf_filesystem_no_in_capsule 00:11:22.467 ************************************ 00:11:22.467 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1214393 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1214393 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1214393 ']' 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.468 21:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.468 [2024-12-05 21:04:29.918008] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:11:22.468 [2024-12-05 21:04:29.918055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.468 [2024-12-05 21:04:29.998651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.468 [2024-12-05 21:04:30.044764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.468 [2024-12-05 21:04:30.044797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:22.468 [2024-12-05 21:04:30.044804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.468 [2024-12-05 21:04:30.044811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.468 [2024-12-05 21:04:30.044816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.468 [2024-12-05 21:04:30.046216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.468 [2024-12-05 21:04:30.046341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.468 [2024-12-05 21:04:30.046357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.468 [2024-12-05 21:04:30.046360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.726 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.726 [2024-12-05 21:04:30.831238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.984 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.984 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:22.984 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.984 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 Malloc1 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 21:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 [2024-12-05 21:04:31.002927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:22.985 21:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:22.985 { 00:11:22.985 "name": "Malloc1", 00:11:22.985 "aliases": [ 00:11:22.985 "ff0187df-417c-40c8-89b6-64a419946057" 00:11:22.985 ], 00:11:22.985 "product_name": "Malloc disk", 00:11:22.985 "block_size": 512, 00:11:22.985 "num_blocks": 1048576, 00:11:22.985 "uuid": "ff0187df-417c-40c8-89b6-64a419946057", 00:11:22.985 "assigned_rate_limits": { 00:11:22.985 "rw_ios_per_sec": 0, 00:11:22.985 "rw_mbytes_per_sec": 0, 00:11:22.985 "r_mbytes_per_sec": 0, 00:11:22.985 "w_mbytes_per_sec": 0 00:11:22.985 }, 00:11:22.985 "claimed": true, 00:11:22.985 "claim_type": "exclusive_write", 00:11:22.985 "zoned": false, 00:11:22.985 "supported_io_types": { 00:11:22.985 "read": true, 00:11:22.985 "write": true, 00:11:22.985 "unmap": true, 00:11:22.985 "flush": true, 00:11:22.985 "reset": true, 00:11:22.985 "nvme_admin": false, 00:11:22.985 "nvme_io": false, 00:11:22.985 "nvme_io_md": false, 00:11:22.985 "write_zeroes": true, 00:11:22.985 "zcopy": true, 00:11:22.985 "get_zone_info": false, 00:11:22.985 "zone_management": false, 00:11:22.985 "zone_append": false, 00:11:22.985 "compare": false, 00:11:22.985 "compare_and_write": 
false, 00:11:22.985 "abort": true, 00:11:22.985 "seek_hole": false, 00:11:22.985 "seek_data": false, 00:11:22.985 "copy": true, 00:11:22.985 "nvme_iov_md": false 00:11:22.985 }, 00:11:22.985 "memory_domains": [ 00:11:22.985 { 00:11:22.985 "dma_device_id": "system", 00:11:22.985 "dma_device_type": 1 00:11:22.985 }, 00:11:22.985 { 00:11:22.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.985 "dma_device_type": 2 00:11:22.985 } 00:11:22.985 ], 00:11:22.985 "driver_specific": {} 00:11:22.985 } 00:11:22.985 ]' 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:22.985 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:23.243 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:23.243 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:23.243 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:23.243 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:23.243 21:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.172 21:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:24.172 21:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.172 21:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.172 21:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:24.172 21:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:26.694 21:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:26.694 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:28.060 21:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.060 ************************************ 00:11:28.060 START TEST filesystem_ext4 00:11:28.060 ************************************ 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:28.060 21:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:28.060 21:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:28.060 mke2fs 1.47.0 (5-Feb-2023) 00:11:28.060 Discarding device blocks: 0/522240 done 00:11:28.060 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:28.060 Filesystem UUID: 88a79877-2aca-42e5-a6f3-075918cf6d59 00:11:28.060 Superblock backups stored on blocks: 00:11:28.060 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:28.060 00:11:28.060 Allocating group tables: 0/64 done 00:11:28.060 Writing inode tables: 0/64 done 00:11:28.060 Creating journal (8192 blocks): done 00:11:28.060 Writing superblocks and filesystem accounting information: 0/64 done 00:11:28.060 00:11:28.060 21:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:28.060 21:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.324 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.324 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:33.324 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.324 21:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:33.324 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:33.324 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1214393 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.583 00:11:33.583 real 0m5.670s 00:11:33.583 user 0m0.033s 00:11:33.583 sys 0m0.062s 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:33.583 ************************************ 00:11:33.583 END TEST filesystem_ext4 00:11:33.583 ************************************ 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:33.583 
21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.583 ************************************ 00:11:33.583 START TEST filesystem_btrfs 00:11:33.583 ************************************ 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:33.583 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:33.584 21:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:33.584 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:33.584 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:33.842 btrfs-progs v6.8.1 00:11:33.842 See https://btrfs.readthedocs.io for more information. 00:11:33.842 00:11:33.842 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:33.842 NOTE: several default settings have changed in version 5.15, please make sure 00:11:33.842 this does not affect your deployments: 00:11:33.842 - DUP for metadata (-m dup) 00:11:33.842 - enabled no-holes (-O no-holes) 00:11:33.842 - enabled free-space-tree (-R free-space-tree) 00:11:33.842 00:11:33.842 Label: (null) 00:11:33.842 UUID: f534ee51-3df5-41a1-b545-52579d7a3f91 00:11:33.842 Node size: 16384 00:11:33.842 Sector size: 4096 (CPU page size: 4096) 00:11:33.842 Filesystem size: 510.00MiB 00:11:33.842 Block group profiles: 00:11:33.842 Data: single 8.00MiB 00:11:33.842 Metadata: DUP 32.00MiB 00:11:33.842 System: DUP 8.00MiB 00:11:33.842 SSD detected: yes 00:11:33.842 Zoned device: no 00:11:33.842 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:33.842 Checksum: crc32c 00:11:33.842 Number of devices: 1 00:11:33.842 Devices: 00:11:33.842 ID SIZE PATH 00:11:33.842 1 510.00MiB /dev/nvme0n1p1 00:11:33.842 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.842 21:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:33.842 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.100 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1214393 00:11:34.100 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.100 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.100 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.100 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.100 00:11:34.100 real 0m0.457s 00:11:34.100 user 0m0.027s 00:11:34.100 sys 0m0.110s 00:11:34.100 21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.100 
21:04:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:34.100 ************************************ 00:11:34.100 END TEST filesystem_btrfs 00:11:34.100 ************************************ 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.100 ************************************ 00:11:34.100 START TEST filesystem_xfs 00:11:34.100 ************************************ 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:34.100 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:34.100 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:34.100 = sectsz=512 attr=2, projid32bit=1 00:11:34.100 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:34.100 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:34.100 data = bsize=4096 blocks=130560, imaxpct=25 00:11:34.100 = sunit=0 swidth=0 blks 00:11:34.100 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:34.100 log =internal log bsize=4096 blocks=16384, version=2 00:11:34.100 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:34.100 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:35.034 Discarding blocks...Done. 
00:11:35.034 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:35.034 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.933 21:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.190 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:37.190 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.190 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:37.190 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1214393 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.191 21:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.191 00:11:37.191 real 0m3.057s 00:11:37.191 user 0m0.026s 00:11:37.191 sys 0m0.073s 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 ************************************ 00:11:37.191 END TEST filesystem_xfs 00:11:37.191 ************************************ 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.191 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1214393 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1214393 ']' 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1214393 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214393 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214393' 00:11:37.449 killing process with pid 1214393 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1214393 00:11:37.449 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1214393 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:37.708 00:11:37.708 real 0m15.828s 00:11:37.708 user 1m2.368s 00:11:37.708 sys 0m1.394s 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.708 ************************************ 00:11:37.708 END TEST nvmf_filesystem_no_in_capsule 00:11:37.708 ************************************ 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.708 21:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.708 ************************************ 00:11:37.708 START TEST nvmf_filesystem_in_capsule 00:11:37.708 ************************************ 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1217153 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1217153 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1217153 ']' 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.708 21:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.708 21:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.708 [2024-12-05 21:04:45.815731] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:11:37.708 [2024-12-05 21:04:45.815777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.966 [2024-12-05 21:04:45.896011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.966 [2024-12-05 21:04:45.934927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.966 [2024-12-05 21:04:45.934968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.966 [2024-12-05 21:04:45.934976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.966 [2024-12-05 21:04:45.934982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.966 [2024-12-05 21:04:45.934987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:37.966 [2024-12-05 21:04:45.936431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.966 [2024-12-05 21:04:45.936538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.966 [2024-12-05 21:04:45.936647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.966 [2024-12-05 21:04:45.936648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.966 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.966 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:37.966 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.966 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.966 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 [2024-12-05 21:04:46.082859] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 21:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 [2024-12-05 21:04:46.244401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.224 21:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.224 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:38.224 { 00:11:38.224 "name": "Malloc1", 00:11:38.224 "aliases": [ 00:11:38.224 "59c3695c-d803-4191-b228-b6780f31dee1" 00:11:38.224 ], 00:11:38.224 "product_name": "Malloc disk", 00:11:38.224 "block_size": 512, 00:11:38.224 "num_blocks": 1048576, 00:11:38.224 "uuid": "59c3695c-d803-4191-b228-b6780f31dee1", 00:11:38.224 "assigned_rate_limits": { 00:11:38.224 "rw_ios_per_sec": 0, 00:11:38.224 "rw_mbytes_per_sec": 0, 00:11:38.224 "r_mbytes_per_sec": 0, 00:11:38.224 "w_mbytes_per_sec": 0 00:11:38.224 }, 00:11:38.224 "claimed": true, 00:11:38.224 "claim_type": "exclusive_write", 00:11:38.224 "zoned": false, 00:11:38.224 "supported_io_types": { 00:11:38.224 "read": true, 00:11:38.224 "write": true, 00:11:38.224 "unmap": true, 00:11:38.224 "flush": true, 00:11:38.224 "reset": true, 00:11:38.224 "nvme_admin": false, 00:11:38.224 "nvme_io": false, 00:11:38.224 "nvme_io_md": false, 00:11:38.224 "write_zeroes": true, 00:11:38.224 "zcopy": true, 00:11:38.224 "get_zone_info": false, 00:11:38.224 "zone_management": false, 00:11:38.225 "zone_append": false, 00:11:38.225 "compare": false, 00:11:38.225 "compare_and_write": false, 00:11:38.225 "abort": true, 00:11:38.225 "seek_hole": false, 00:11:38.225 "seek_data": false, 00:11:38.225 "copy": true, 00:11:38.225 "nvme_iov_md": false 00:11:38.225 }, 00:11:38.225 "memory_domains": [ 00:11:38.225 { 00:11:38.225 "dma_device_id": "system", 00:11:38.225 "dma_device_type": 1 00:11:38.225 }, 00:11:38.225 { 00:11:38.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.225 "dma_device_type": 2 00:11:38.225 } 00:11:38.225 ], 00:11:38.225 
"driver_specific": {} 00:11:38.225 } 00:11:38.225 ]' 00:11:38.225 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:38.225 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:38.225 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:38.482 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:38.482 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:38.482 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:38.482 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:38.482 21:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.857 21:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.857 21:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:39.857 21:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.857 21:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:39.857 21:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:41.755 21:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:41.755 21:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:42.013 21:04:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.405 ************************************ 00:11:43.405 START TEST filesystem_in_capsule_ext4 00:11:43.405 ************************************ 00:11:43.405 21:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:43.405 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:43.406 mke2fs 1.47.0 (5-Feb-2023) 00:11:43.406 Discarding device blocks: 
0/522240 done 00:11:43.406 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:43.406 Filesystem UUID: 86ab9058-1534-4dff-bec1-ca650fe69021 00:11:43.406 Superblock backups stored on blocks: 00:11:43.406 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:43.406 00:11:43.406 Allocating group tables: 0/64 done 00:11:43.406 Writing inode tables: 0/64 done 00:11:43.406 Creating journal (8192 blocks): done 00:11:43.406 Writing superblocks and filesystem accounting information: 0/64 done 00:11:43.406 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:43.406 21:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.653 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1217153 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.911 00:11:48.911 real 0m5.709s 00:11:48.911 user 0m0.024s 00:11:48.911 sys 0m0.072s 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:48.911 ************************************ 00:11:48.911 END TEST filesystem_in_capsule_ext4 00:11:48.911 ************************************ 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.911 ************************************ 00:11:48.911 START 
TEST filesystem_in_capsule_btrfs 00:11:48.911 ************************************ 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:48.911 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.169 btrfs-progs v6.8.1 00:11:49.169 See https://btrfs.readthedocs.io for more information. 00:11:49.169 00:11:49.169 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:49.169 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.169 this does not affect your deployments: 00:11:49.169 - DUP for metadata (-m dup) 00:11:49.169 - enabled no-holes (-O no-holes) 00:11:49.169 - enabled free-space-tree (-R free-space-tree) 00:11:49.169 00:11:49.169 Label: (null) 00:11:49.169 UUID: 959e7546-1c47-4c2f-b832-3f047042e602 00:11:49.169 Node size: 16384 00:11:49.169 Sector size: 4096 (CPU page size: 4096) 00:11:49.169 Filesystem size: 510.00MiB 00:11:49.169 Block group profiles: 00:11:49.169 Data: single 8.00MiB 00:11:49.169 Metadata: DUP 32.00MiB 00:11:49.169 System: DUP 8.00MiB 00:11:49.169 SSD detected: yes 00:11:49.169 Zoned device: no 00:11:49.169 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.169 Checksum: crc32c 00:11:49.169 Number of devices: 1 00:11:49.169 Devices: 00:11:49.169 ID SIZE PATH 00:11:49.169 1 510.00MiB /dev/nvme0n1p1 00:11:49.169 00:11:49.169 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:49.169 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1217153 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.426 00:11:49.426 real 0m0.575s 00:11:49.426 user 0m0.029s 00:11:49.426 sys 0m0.114s 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.426 ************************************ 00:11:49.426 END TEST filesystem_in_capsule_btrfs 00:11:49.426 ************************************ 00:11:49.426 21:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.426 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.684 ************************************ 00:11:49.684 START TEST filesystem_in_capsule_xfs 00:11:49.684 ************************************ 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:49.684 
21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:49.684 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:49.684 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:49.684 = sectsz=512 attr=2, projid32bit=1 00:11:49.684 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:49.684 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:49.684 data = bsize=4096 blocks=130560, imaxpct=25 00:11:49.684 = sunit=0 swidth=0 blks 00:11:49.684 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:49.684 log =internal log bsize=4096 blocks=16384, version=2 00:11:49.684 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:49.684 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:51.057 Discarding blocks...Done. 
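A side note on the sizes reported in this log: the ext4 run created 522240 1k blocks and the xfs run 130560 4096-byte data blocks, both of which work out to the same 510 MiB partition (the 512 MiB Malloc1 bdev minus GPT overhead). A minimal sketch of that arithmetic, with the figures copied from this run's output (another run may report different geometry):

```python
# Figures copied from the mkfs output captured in this log; a different run may differ.
ext4_bytes = 522240 * 1024    # mkfs.ext4: "Creating filesystem with 522240 1k blocks"
xfs_bytes = 130560 * 4096     # mkfs.xfs: "data ... bsize=4096 blocks=130560"
assert ext4_bytes == xfs_bytes  # both filesystems were laid down on the same partition
print(ext4_bytes / (1024 * 1024))  # 510.0 MiB: the 512 MiB malloc bdev minus GPT overhead
```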
00:11:51.057 21:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:51.057 21:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1217153 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.585 00:11:53.585 real 0m3.696s 00:11:53.585 user 0m0.022s 00:11:53.585 sys 0m0.078s 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.585 ************************************ 00:11:53.585 END TEST filesystem_in_capsule_xfs 00:11:53.585 ************************************ 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.585 21:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1217153 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1217153 ']' 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1217153 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.585 21:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217153 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217153' 00:11:53.585 killing process with pid 1217153 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1217153 00:11:53.585 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1217153 00:11:53.843 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.843 00:11:53.843 real 0m16.188s 00:11:53.843 user 1m3.651s 00:11:53.843 sys 0m1.410s 00:11:53.843 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.843 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.843 ************************************ 00:11:53.843 END TEST nvmf_filesystem_in_capsule 00:11:53.843 ************************************ 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.102 21:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.102 rmmod nvme_tcp 00:11:54.102 rmmod nvme_fabrics 00:11:54.102 rmmod nvme_keyring 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.102 21:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.008 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.266 00:11:56.266 real 0m40.802s 00:11:56.266 user 2m8.021s 00:11:56.266 sys 0m7.606s 00:11:56.266 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.266 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 ************************************ 00:11:56.266 END TEST nvmf_filesystem 00:11:56.266 ************************************ 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.267 ************************************ 00:11:56.267 START TEST nvmf_target_discovery 00:11:56.267 ************************************ 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.267 * Looking for test storage... 
00:11:56.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:56.267 
21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.267 --rc genhtml_branch_coverage=1 00:11:56.267 --rc genhtml_function_coverage=1 00:11:56.267 --rc genhtml_legend=1 00:11:56.267 --rc geninfo_all_blocks=1 00:11:56.267 --rc geninfo_unexecuted_blocks=1 00:11:56.267 00:11:56.267 ' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.267 --rc genhtml_branch_coverage=1 00:11:56.267 --rc genhtml_function_coverage=1 00:11:56.267 --rc genhtml_legend=1 00:11:56.267 --rc geninfo_all_blocks=1 00:11:56.267 --rc geninfo_unexecuted_blocks=1 00:11:56.267 00:11:56.267 ' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.267 --rc genhtml_branch_coverage=1 00:11:56.267 --rc genhtml_function_coverage=1 00:11:56.267 --rc genhtml_legend=1 00:11:56.267 --rc geninfo_all_blocks=1 00:11:56.267 --rc geninfo_unexecuted_blocks=1 00:11:56.267 00:11:56.267 ' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.267 --rc genhtml_branch_coverage=1 00:11:56.267 --rc genhtml_function_coverage=1 00:11:56.267 --rc genhtml_legend=1 00:11:56.267 --rc geninfo_all_blocks=1 00:11:56.267 --rc geninfo_unexecuted_blocks=1 00:11:56.267 00:11:56.267 ' 00:11:56.267 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.528 21:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.528 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.529 21:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.100 21:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.100 21:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:03.100 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:03.100 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.100 21:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:03.100 Found net devices under 0000:86:00.0: cvl_0_0 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.100 21:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:03.100 Found net devices under 0000:86:00.1: cvl_0_1 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up
00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:03.100 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:03.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:03.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms
00:12:03.101
00:12:03.101 --- 10.0.0.2 ping statistics ---
00:12:03.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:03.101 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:03.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:03.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms
00:12:03.101
00:12:03.101 --- 10.0.0.1 ping statistics ---
00:12:03.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:03.101 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1223654
00:12:03.101 21:05:10
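The interface setup traced above follows a fixed recipe: move the target-side interface into a private network namespace, address both ends from the same /24, bring everything (including the namespace loopback) up, and prove connectivity with one ping in each direction. A minimal sketch of that recipe, using the interface and namespace names from this log; `run` and its `DRY_RUN` guard are illustrative additions (default is to only print the commands, since the real ones need root and the `cvl_*` interfaces):

```shell
# Sketch of the namespace topology built in the log above.
# DRY_RUN=1 (default) prints each command; DRY_RUN=0 executes (root required).
set -e
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk          # namespace holding the target side
TGT_IF=cvl_0_0              # target interface (moved into $NS)
INI_IF=cvl_0_1              # initiator interface (stays in the root ns)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# sanity check in both directions, as the log does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```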
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1223654 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1223654 ']' 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 [2024-12-05 21:05:10.437249] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:12:03.101 [2024-12-05 21:05:10.437300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.101 [2024-12-05 21:05:10.515839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.101 [2024-12-05 21:05:10.556778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
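The `waitforlisten 1223654` step above blocks until the freshly launched `nvmf_tgt` is reachable on `/var/tmp/spdk.sock`, rather than sleeping for a fixed interval. The pattern can be sketched as a small poll loop; `wait_for_socket` is a hypothetical stand-in, not SPDK's actual helper, and checks only that the socket exists (the real helper goes further and probes it with a test RPC):

```shell
# Poll for a UNIX-domain socket instead of sleeping a fixed time.
# wait_for_socket is an illustrative stand-in for SPDK's waitforlisten.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # socket exists: app is (probably) listening
    retries=$((retries - 1))
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}

# outside the CI host this is expected to time out quickly
wait_for_socket /var/tmp/spdk.sock 1 || echo "spdk.sock not present"
```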
00:12:03.101 [2024-12-05 21:05:10.556817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.101 [2024-12-05 21:05:10.556824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.101 [2024-12-05 21:05:10.556832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.101 [2024-12-05 21:05:10.556837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.101 [2024-12-05 21:05:10.558316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.101 [2024-12-05 21:05:10.558424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.101 [2024-12-05 21:05:10.558461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.101 [2024-12-05 21:05:10.558462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 [2024-12-05 21:05:10.708707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 Null1 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 
21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 [2024-12-05 21:05:10.764543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 Null2 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 
21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 Null3 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 Null4 00:12:03.101 
21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.101 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
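The four `rpc_cmd` rounds above (Null1 through Null4) all follow the same shape: create a null bdev, create a subsystem, attach the bdev as a namespace, add a TCP listener. Condensed into the loop that `discovery.sh` is effectively running; the `rpc.py` path is an assumption, and `RPC` defaults to a dry-run `echo` here so the generated invocations can be inspected without a live target:

```shell
# Provisioning loop condensed from the trace above: one null bdev per
# subsystem, all listening on the same TCP portal.
# Default is a dry run that prints the rpc.py invocations; set
# RPC=scripts/rpc.py to drive a live target.
RPC="${RPC:-echo scripts/rpc.py}"

for i in 1 2 3 4; do
  $RPC bdev_null_create "Null$i" 102400 512                 # size/block size as in the log
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
       -a -s "SPDK0000000000000$i"                          # -a: allow any host
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
       -t tcp -a 10.0.0.2 -s 4420
done
```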
common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.102 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:12:03.102
00:12:03.102 Discovery Log Number of Records 6, Generation counter 6
00:12:03.102 =====Discovery Log Entry 0======
00:12:03.102 trtype: tcp
00:12:03.102 adrfam: ipv4
00:12:03.102 subtype: current discovery subsystem
00:12:03.102 treq: not required
00:12:03.102 portid: 0
00:12:03.102 trsvcid: 4420
00:12:03.102 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:03.102 traddr: 10.0.0.2
00:12:03.102 eflags: explicit discovery connections, duplicate discovery information
00:12:03.102 sectype: none
00:12:03.102 =====Discovery Log Entry 1======
00:12:03.102 trtype: tcp
00:12:03.102 adrfam: ipv4
00:12:03.102 subtype: nvme subsystem
00:12:03.102 treq: not required
00:12:03.102 portid: 0
00:12:03.102 trsvcid: 4420
00:12:03.102 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:03.102 traddr: 10.0.0.2
00:12:03.102 eflags: none
00:12:03.102 sectype: none
00:12:03.102 =====Discovery Log Entry 2======
00:12:03.102 trtype: tcp
00:12:03.102 adrfam: ipv4
00:12:03.102 subtype: nvme subsystem
00:12:03.102 treq: not required
00:12:03.102 portid: 0
00:12:03.102 trsvcid: 4420
00:12:03.102 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:03.102 traddr: 10.0.0.2
00:12:03.102 eflags: none
00:12:03.102 sectype: none
00:12:03.102 =====Discovery Log Entry 3======
00:12:03.102 trtype: tcp
00:12:03.102 adrfam: ipv4
00:12:03.102 subtype: nvme subsystem
00:12:03.102 treq: not required
00:12:03.102 portid: 0
00:12:03.102 trsvcid: 4420
00:12:03.102 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:03.102 traddr: 10.0.0.2
00:12:03.102 eflags: none
00:12:03.102 sectype: none
00:12:03.102 =====Discovery Log Entry 4======
00:12:03.102 trtype: tcp
00:12:03.102 adrfam: ipv4
00:12:03.102 subtype: nvme subsystem
00:12:03.102 treq: not required
00:12:03.102 portid: 0
00:12:03.102 trsvcid: 4420
00:12:03.102 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:03.102 traddr: 10.0.0.2
00:12:03.102 eflags: none
00:12:03.102 sectype: none
00:12:03.102 =====Discovery Log Entry 5======
00:12:03.102 trtype: tcp
00:12:03.102 adrfam: ipv4
00:12:03.102 subtype: discovery subsystem referral
00:12:03.102 treq: not required
00:12:03.102 portid: 0
00:12:03.102 trsvcid: 4430
00:12:03.102 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:03.102 traddr: 10.0.0.2
00:12:03.102 eflags: none
00:12:03.102 sectype: none
00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:03.102 Perform nvmf subsystem discovery via RPC
00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:03.102 [
00:12:03.102 {
00:12:03.102 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:03.102 "subtype": "Discovery",
00:12:03.102 "listen_addresses": [
00:12:03.102 {
00:12:03.102 "trtype": "TCP",
00:12:03.102 "adrfam": "IPv4",
00:12:03.102 "traddr": "10.0.0.2",
00:12:03.102 "trsvcid": "4420"
00:12:03.102 }
00:12:03.102 ],
00:12:03.102 "allow_any_host": true,
00:12:03.102 "hosts": []
00:12:03.102 },
00:12:03.102 {
00:12:03.102 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:03.102 "subtype": "NVMe",
00:12:03.102 "listen_addresses": [
00:12:03.102 {
00:12:03.102 "trtype": "TCP",
00:12:03.102 "adrfam": "IPv4",
00:12:03.102 "traddr": "10.0.0.2",
00:12:03.102 "trsvcid": "4420"
00:12:03.102 }
00:12:03.102 ],
00:12:03.102 "allow_any_host": true,
00:12:03.102 "hosts": [],
00:12:03.102 "serial_number": "SPDK00000000000001",
00:12:03.102 "model_number": "SPDK bdev Controller",
00:12:03.102 "max_namespaces": 32,
00:12:03.102 "min_cntlid": 1,
00:12:03.102 "max_cntlid": 65519,
00:12:03.102 "namespaces": [
00:12:03.102 {
00:12:03.102 "nsid": 1,
00:12:03.102 "bdev_name": "Null1",
00:12:03.102 "name": "Null1",
00:12:03.102 "nguid": "0505B7FCA6334951B02350FE7BEB08C9",
00:12:03.102 "uuid": "0505b7fc-a633-4951-b023-50fe7beb08c9"
00:12:03.102 }
00:12:03.102 ]
00:12:03.102 },
00:12:03.102 {
00:12:03.102 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:03.102 "subtype": "NVMe",
00:12:03.102 "listen_addresses": [
00:12:03.102 {
00:12:03.102 "trtype": "TCP",
00:12:03.102 "adrfam": "IPv4",
00:12:03.102 "traddr": "10.0.0.2",
00:12:03.102 "trsvcid": "4420"
00:12:03.102 }
00:12:03.102 ],
00:12:03.102 "allow_any_host": true,
00:12:03.102 "hosts": [],
00:12:03.102 "serial_number": "SPDK00000000000002",
00:12:03.102 "model_number": "SPDK bdev Controller",
00:12:03.102 "max_namespaces": 32,
00:12:03.102 "min_cntlid": 1,
00:12:03.102 "max_cntlid": 65519,
00:12:03.102 "namespaces": [
00:12:03.102 {
00:12:03.102 "nsid": 1,
00:12:03.102 "bdev_name": "Null2",
00:12:03.102 "name": "Null2",
00:12:03.102 "nguid": "DAF5210C17354C2B9A30D327185A9BD2",
00:12:03.102 "uuid": "daf5210c-1735-4c2b-9a30-d327185a9bd2"
00:12:03.102 }
00:12:03.102 ]
00:12:03.102 },
00:12:03.102 {
00:12:03.102 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:03.102 "subtype": "NVMe",
00:12:03.102 "listen_addresses": [
00:12:03.102 {
00:12:03.102 "trtype": "TCP",
00:12:03.102 "adrfam": "IPv4",
00:12:03.102 "traddr": "10.0.0.2",
00:12:03.102 "trsvcid": "4420"
00:12:03.102 }
00:12:03.102 ],
00:12:03.102 "allow_any_host": true,
00:12:03.102 "hosts": [],
00:12:03.102 "serial_number": "SPDK00000000000003",
00:12:03.102 "model_number": "SPDK bdev Controller",
00:12:03.102 "max_namespaces": 32,
00:12:03.102 "min_cntlid": 1,
00:12:03.102 "max_cntlid": 65519,
00:12:03.102 "namespaces": [
00:12:03.102 {
00:12:03.102 "nsid": 1,
00:12:03.102 "bdev_name": "Null3",
00:12:03.102 "name": "Null3",
00:12:03.102 "nguid": "E2245F138F2644F49F397399BC5BD440",
00:12:03.102 "uuid": "e2245f13-8f26-44f4-9f39-7399bc5bd440"
00:12:03.102 }
00:12:03.102 ]
00:12:03.102 },
00:12:03.102 {
00:12:03.102 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:03.102 "subtype": "NVMe",
00:12:03.102 "listen_addresses": [
00:12:03.102 {
00:12:03.102 "trtype": "TCP",
00:12:03.102 "adrfam": "IPv4",
00:12:03.102 "traddr": "10.0.0.2",
00:12:03.102 "trsvcid": "4420"
00:12:03.102 }
00:12:03.102 ],
00:12:03.102 "allow_any_host": true,
00:12:03.102 "hosts": [],
00:12:03.102 "serial_number": "SPDK00000000000004",
00:12:03.102 "model_number": "SPDK bdev Controller",
00:12:03.102 "max_namespaces": 32,
00:12:03.102 "min_cntlid": 1,
00:12:03.102 "max_cntlid": 65519,
00:12:03.102 "namespaces": [
00:12:03.102 {
00:12:03.102 "nsid": 1,
00:12:03.102 "bdev_name": "Null4",
00:12:03.102 "name": "Null4",
00:12:03.102 "nguid": "721A74A3F1444F259C23A3D19CC3D4C7",
00:12:03.102 "uuid": "721a74a3-f144-4f25-9c23-a3d19cc3d4c7"
00:12:03.102 }
00:12:03.102 ]
00:12:03.102 }
00:12:03.102 ]
00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.102
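The discovery log above advertises "Number of Records 6" and then prints six entries: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral. When scripting against saved `nvme discover` output, it is worth checking that the advertised count matches the entries actually printed. A small sketch; `check_discovery` is a hypothetical helper, not part of the SPDK test suite:

```shell
# Cross-check the claimed record count against the printed entries in a
# saved "nvme discover" transcript.
check_discovery() {
  local f=$1 claimed entries
  claimed=$(sed -n 's/.*Number of Records \([0-9]*\).*/\1/p' "$f")
  entries=$(grep -c '=====Discovery Log Entry' "$f")
  [ "$claimed" = "$entries" ]
}
```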
21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.102 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.365 rmmod nvme_tcp 00:12:03.365 rmmod nvme_fabrics 00:12:03.365 rmmod nvme_keyring 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1223654 ']' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1223654 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1223654 ']' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1223654 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223654 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223654' 00:12:03.365 killing process with pid 1223654 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1223654 00:12:03.365 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1223654 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.625 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.532 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.532 00:12:05.532 real 0m9.416s 00:12:05.532 user 0m5.771s 00:12:05.532 sys 0m4.847s 00:12:05.533 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.533 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 ************************************ 00:12:05.533 END TEST nvmf_target_discovery 00:12:05.533 ************************************ 00:12:05.792 21:05:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:05.792 21:05:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.792 21:05:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.792 21:05:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.792 ************************************ 00:12:05.792 START TEST nvmf_referrals 00:12:05.793 ************************************ 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:05.793 * Looking for test storage... 
00:12:05.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:05.793 21:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.793 
--rc genhtml_branch_coverage=1 00:12:05.793 --rc genhtml_function_coverage=1 00:12:05.793 --rc genhtml_legend=1 00:12:05.793 --rc geninfo_all_blocks=1 00:12:05.793 --rc geninfo_unexecuted_blocks=1 00:12:05.793 00:12:05.793 ' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.793 --rc genhtml_branch_coverage=1 00:12:05.793 --rc genhtml_function_coverage=1 00:12:05.793 --rc genhtml_legend=1 00:12:05.793 --rc geninfo_all_blocks=1 00:12:05.793 --rc geninfo_unexecuted_blocks=1 00:12:05.793 00:12:05.793 ' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.793 --rc genhtml_branch_coverage=1 00:12:05.793 --rc genhtml_function_coverage=1 00:12:05.793 --rc genhtml_legend=1 00:12:05.793 --rc geninfo_all_blocks=1 00:12:05.793 --rc geninfo_unexecuted_blocks=1 00:12:05.793 00:12:05.793 ' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.793 --rc genhtml_branch_coverage=1 00:12:05.793 --rc genhtml_function_coverage=1 00:12:05.793 --rc genhtml_legend=1 00:12:05.793 --rc geninfo_all_blocks=1 00:12:05.793 --rc geninfo_unexecuted_blocks=1 00:12:05.793 00:12:05.793 ' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.793 
21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.793 21:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.793 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.794 21:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.794 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.052 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.052 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.052 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.052 21:05:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:12.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:12.797 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:12.797 Found net devices under 0000:86:00.0: cvl_0_0 00:12:12.797 21:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:12.797 Found net devices under 0000:86:00.1: cvl_0_1 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:12:12.797 00:12:12.797 --- 10.0.0.2 ping statistics --- 00:12:12.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.797 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:12:12.797 00:12:12.797 --- 10.0.0.1 ping statistics --- 00:12:12.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.797 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1227246 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1227246 00:12:12.797 
21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1227246 ']' 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.797 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 [2024-12-05 21:05:19.951732] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:12:12.797 [2024-12-05 21:05:19.951775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.797 [2024-12-05 21:05:20.030597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.797 [2024-12-05 21:05:20.076833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.797 [2024-12-05 21:05:20.076871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:12.797 [2024-12-05 21:05:20.076878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.797 [2024-12-05 21:05:20.076884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.797 [2024-12-05 21:05:20.076889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.797 [2024-12-05 21:05:20.078351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.797 [2024-12-05 21:05:20.078456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.797 [2024-12-05 21:05:20.078491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.797 [2024-12-05 21:05:20.078492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 [2024-12-05 21:05:20.217747] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 [2024-12-05 21:05:20.242519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:12.797 21:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.797 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:12.798 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.054 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.054 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.054 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.054 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.054 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:13.310 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:13.310 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:13.310 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:13.310 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:13.310 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.310 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.567 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:13.824 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:13.824 21:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:13.824 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:13.824 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:13.825 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.825 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:14.082 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:14.082 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:14.082 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.082 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.082 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.340 rmmod nvme_tcp 00:12:14.340 rmmod nvme_fabrics 00:12:14.340 rmmod nvme_keyring 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1227246 ']' 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1227246 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1227246 ']' 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1227246 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1227246 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1227246' 00:12:14.340 killing process with pid 1227246 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1227246 00:12:14.340 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1227246 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.598 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.599 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.599 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.599 21:05:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.502 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.502 00:12:16.502 real 0m10.889s 00:12:16.502 user 0m12.269s 00:12:16.502 sys 0m5.231s 00:12:16.502 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.502 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.502 
************************************ 00:12:16.502 END TEST nvmf_referrals 00:12:16.502 ************************************ 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.761 ************************************ 00:12:16.761 START TEST nvmf_connect_disconnect 00:12:16.761 ************************************ 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:16.761 * Looking for test storage... 
00:12:16.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.761 --rc genhtml_branch_coverage=1 00:12:16.761 --rc genhtml_function_coverage=1 00:12:16.761 --rc genhtml_legend=1 00:12:16.761 --rc geninfo_all_blocks=1 00:12:16.761 --rc geninfo_unexecuted_blocks=1 00:12:16.761 00:12:16.761 ' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.761 --rc genhtml_branch_coverage=1 00:12:16.761 --rc genhtml_function_coverage=1 00:12:16.761 --rc genhtml_legend=1 00:12:16.761 --rc geninfo_all_blocks=1 00:12:16.761 --rc geninfo_unexecuted_blocks=1 00:12:16.761 00:12:16.761 ' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.761 --rc genhtml_branch_coverage=1 00:12:16.761 --rc genhtml_function_coverage=1 00:12:16.761 --rc genhtml_legend=1 00:12:16.761 --rc geninfo_all_blocks=1 00:12:16.761 --rc geninfo_unexecuted_blocks=1 00:12:16.761 00:12:16.761 ' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.761 --rc genhtml_branch_coverage=1 00:12:16.761 --rc genhtml_function_coverage=1 00:12:16.761 --rc genhtml_legend=1 00:12:16.761 --rc geninfo_all_blocks=1 00:12:16.761 --rc geninfo_unexecuted_blocks=1 00:12:16.761 00:12:16.761 ' 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.761 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.762 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.331 21:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.331 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.332 21:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.332 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.332 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.332 21:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.332 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.332 21:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.332 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.332 21:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:12:23.332 00:12:23.332 --- 10.0.0.2 ping statistics --- 00:12:23.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.332 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:12:23.332 00:12:23.332 --- 10.0.0.1 ping statistics --- 00:12:23.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.332 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.332 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1231310 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1231310 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1231310 ']' 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.333 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.333 [2024-12-05 21:05:30.893209] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:12:23.333 [2024-12-05 21:05:30.893257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.333 [2024-12-05 21:05:30.972894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.333 [2024-12-05 21:05:31.013780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:23.333 [2024-12-05 21:05:31.013816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.333 [2024-12-05 21:05:31.013823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.333 [2024-12-05 21:05:31.013828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.333 [2024-12-05 21:05:31.013833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.333 [2024-12-05 21:05:31.015276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.333 [2024-12-05 21:05:31.015414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.333 [2024-12-05 21:05:31.015502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.333 [2024-12-05 21:05:31.015502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:23.900 21:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.900 [2024-12-05 21:05:31.771677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.900 21:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.900 [2024-12-05 21:05:31.839560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:23.900 21:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:27.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.303 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:40.303 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:40.303 21:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.303 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.304 rmmod nvme_tcp 00:12:40.304 rmmod nvme_fabrics 00:12:40.304 rmmod nvme_keyring 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1231310 ']' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1231310 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1231310 ']' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1231310 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231310 
00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231310' 00:12:40.304 killing process with pid 1231310 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1231310 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1231310 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.304 21:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.304 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.838 00:12:42.838 real 0m25.796s 00:12:42.838 user 1m10.748s 00:12:42.838 sys 0m5.874s 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.838 ************************************ 00:12:42.838 END TEST nvmf_connect_disconnect 00:12:42.838 ************************************ 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.838 ************************************ 00:12:42.838 START TEST nvmf_multitarget 00:12:42.838 ************************************ 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:42.838 * Looking for test storage... 
00:12:42.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:42.838 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.838 --rc genhtml_branch_coverage=1 00:12:42.838 --rc genhtml_function_coverage=1 00:12:42.838 --rc genhtml_legend=1 00:12:42.838 --rc geninfo_all_blocks=1 00:12:42.838 --rc geninfo_unexecuted_blocks=1 00:12:42.838 00:12:42.838 ' 00:12:42.838 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:42.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.838 --rc genhtml_branch_coverage=1 00:12:42.838 --rc genhtml_function_coverage=1 00:12:42.838 --rc genhtml_legend=1 00:12:42.838 --rc geninfo_all_blocks=1 00:12:42.839 --rc geninfo_unexecuted_blocks=1 00:12:42.839 00:12:42.839 ' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:42.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.839 --rc genhtml_branch_coverage=1 00:12:42.839 --rc genhtml_function_coverage=1 00:12:42.839 --rc genhtml_legend=1 00:12:42.839 --rc geninfo_all_blocks=1 00:12:42.839 --rc geninfo_unexecuted_blocks=1 00:12:42.839 00:12:42.839 ' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:42.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.839 --rc genhtml_branch_coverage=1 00:12:42.839 --rc genhtml_function_coverage=1 00:12:42.839 --rc genhtml_legend=1 00:12:42.839 --rc geninfo_all_blocks=1 00:12:42.839 --rc geninfo_unexecuted_blocks=1 00:12:42.839 00:12:42.839 ' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.839 21:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.839 21:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.839 21:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:49.412 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:49.412 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.412 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.413 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.413 Found net devices under 0000:86:00.0: cvl_0_0 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.413 
21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.413 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.413 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:12:49.413 00:12:49.413 --- 10.0.0.2 ping statistics --- 00:12:49.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.413 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:12:49.413 00:12:49.413 --- 10.0.0.1 ping statistics --- 00:12:49.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.413 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1237758 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1237758 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1237758 ']' 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.413 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.413 [2024-12-05 21:05:56.736985] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:12:49.414 [2024-12-05 21:05:56.737037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.414 [2024-12-05 21:05:56.817168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.414 [2024-12-05 21:05:56.864124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.414 [2024-12-05 21:05:56.864157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:49.414 [2024-12-05 21:05:56.864168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.414 [2024-12-05 21:05:56.864176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.414 [2024-12-05 21:05:56.864181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.414 [2024-12-05 21:05:56.865626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.414 [2024-12-05 21:05:56.865758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.414 [2024-12-05 21:05:56.865880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.414 [2024-12-05 21:05:56.865880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.674 21:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:49.674 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:49.933 "nvmf_tgt_1" 00:12:49.933 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:49.933 "nvmf_tgt_2" 00:12:49.933 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.933 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:49.933 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:49.933 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:50.192 true 00:12:50.192 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:50.192 true 00:12:50.192 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:50.192 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:50.450 rmmod nvme_tcp 00:12:50.450 rmmod nvme_fabrics 00:12:50.450 rmmod nvme_keyring 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1237758 ']' 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1237758 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1237758 ']' 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1237758 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1237758 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237758' 00:12:50.450 killing process with pid 1237758 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1237758 00:12:50.450 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1237758 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.709 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.613 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.872 00:12:52.872 real 0m10.204s 00:12:52.872 user 0m9.756s 00:12:52.872 sys 0m4.938s 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.872 ************************************ 00:12:52.872 END TEST nvmf_multitarget 00:12:52.872 ************************************ 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.872 ************************************ 00:12:52.872 START TEST nvmf_rpc 00:12:52.872 ************************************ 00:12:52.872 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.872 * Looking for test storage... 
00:12:52.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.873 21:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.873 --rc genhtml_branch_coverage=1 00:12:52.873 --rc genhtml_function_coverage=1 00:12:52.873 --rc genhtml_legend=1 00:12:52.873 --rc geninfo_all_blocks=1 00:12:52.873 --rc geninfo_unexecuted_blocks=1 
00:12:52.873 00:12:52.873 ' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.873 --rc genhtml_branch_coverage=1 00:12:52.873 --rc genhtml_function_coverage=1 00:12:52.873 --rc genhtml_legend=1 00:12:52.873 --rc geninfo_all_blocks=1 00:12:52.873 --rc geninfo_unexecuted_blocks=1 00:12:52.873 00:12:52.873 ' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:52.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.873 --rc genhtml_branch_coverage=1 00:12:52.873 --rc genhtml_function_coverage=1 00:12:52.873 --rc genhtml_legend=1 00:12:52.873 --rc geninfo_all_blocks=1 00:12:52.873 --rc geninfo_unexecuted_blocks=1 00:12:52.873 00:12:52.873 ' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.873 --rc genhtml_branch_coverage=1 00:12:52.873 --rc genhtml_function_coverage=1 00:12:52.873 --rc genhtml_legend=1 00:12:52.873 --rc geninfo_all_blocks=1 00:12:52.873 --rc geninfo_unexecuted_blocks=1 00:12:52.873 00:12:52.873 ' 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.873 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.132 21:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.132 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:53.132 21:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.132 21:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.697 
21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:59.697 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:59.697 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.697 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:59.698 Found net devices under 0000:86:00.0: cvl_0_0 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:59.698 Found net devices under 0000:86:00.1: cvl_0_1 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.698 21:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:59.698 
21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:59.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:12:59.698 00:12:59.698 --- 10.0.0.2 ping statistics --- 00:12:59.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.698 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:12:59.698 00:12:59.698 --- 10.0.0.1 ping statistics --- 00:12:59.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.698 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.698 21:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1241835 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1241835 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1241835 ']' 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.698 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.698 [2024-12-05 21:06:07.065530] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:12:59.698 [2024-12-05 21:06:07.065580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.698 [2024-12-05 21:06:07.146722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.698 [2024-12-05 21:06:07.189165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.698 [2024-12-05 21:06:07.189202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:59.698 [2024-12-05 21:06:07.189209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.698 [2024-12-05 21:06:07.189216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.698 [2024-12-05 21:06:07.189221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.698 [2024-12-05 21:06:07.190746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.698 [2024-12-05 21:06:07.190855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.698 [2024-12-05 21:06:07.190962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.698 [2024-12-05 21:06:07.190963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.956 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.957 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.957 21:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:59.957 "tick_rate": 2100000000, 00:12:59.957 "poll_groups": [ 00:12:59.957 { 00:12:59.957 "name": "nvmf_tgt_poll_group_000", 00:12:59.957 "admin_qpairs": 0, 00:12:59.957 "io_qpairs": 0, 00:12:59.957 "current_admin_qpairs": 0, 00:12:59.957 "current_io_qpairs": 0, 00:12:59.957 "pending_bdev_io": 0, 00:12:59.957 "completed_nvme_io": 0, 00:12:59.957 "transports": [] 00:12:59.957 }, 00:12:59.957 { 00:12:59.957 "name": "nvmf_tgt_poll_group_001", 00:12:59.957 "admin_qpairs": 0, 00:12:59.957 "io_qpairs": 0, 00:12:59.957 "current_admin_qpairs": 0, 00:12:59.957 "current_io_qpairs": 0, 00:12:59.957 "pending_bdev_io": 0, 00:12:59.957 "completed_nvme_io": 0, 00:12:59.957 "transports": [] 00:12:59.957 }, 00:12:59.957 { 00:12:59.957 "name": "nvmf_tgt_poll_group_002", 00:12:59.957 "admin_qpairs": 0, 00:12:59.957 "io_qpairs": 0, 00:12:59.957 "current_admin_qpairs": 0, 00:12:59.957 "current_io_qpairs": 0, 00:12:59.957 "pending_bdev_io": 0, 00:12:59.957 "completed_nvme_io": 0, 00:12:59.957 "transports": [] 00:12:59.957 }, 00:12:59.957 { 00:12:59.957 "name": "nvmf_tgt_poll_group_003", 00:12:59.957 "admin_qpairs": 0, 00:12:59.957 "io_qpairs": 0, 00:12:59.957 "current_admin_qpairs": 0, 00:12:59.957 "current_io_qpairs": 0, 00:12:59.957 "pending_bdev_io": 0, 00:12:59.957 "completed_nvme_io": 0, 00:12:59.957 "transports": [] 00:12:59.957 } 00:12:59.957 ] 00:12:59.957 }' 00:12:59.957 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:59.957 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:59.957 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:59.957 21:06:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:59.957 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:59.957 21:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:59.957 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:59.957 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.957 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.957 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.957 [2024-12-05 21:06:08.055885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.957 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:00.216 "tick_rate": 2100000000, 00:13:00.216 "poll_groups": [ 00:13:00.216 { 00:13:00.216 "name": "nvmf_tgt_poll_group_000", 00:13:00.216 "admin_qpairs": 0, 00:13:00.216 "io_qpairs": 0, 00:13:00.216 "current_admin_qpairs": 0, 00:13:00.216 "current_io_qpairs": 0, 00:13:00.216 "pending_bdev_io": 0, 00:13:00.216 "completed_nvme_io": 0, 00:13:00.216 "transports": [ 00:13:00.216 { 00:13:00.216 "trtype": "TCP" 00:13:00.216 } 00:13:00.216 ] 00:13:00.216 }, 00:13:00.216 { 00:13:00.216 "name": "nvmf_tgt_poll_group_001", 00:13:00.216 "admin_qpairs": 0, 00:13:00.216 "io_qpairs": 0, 00:13:00.216 "current_admin_qpairs": 0, 00:13:00.216 "current_io_qpairs": 0, 00:13:00.216 "pending_bdev_io": 0, 00:13:00.216 
"completed_nvme_io": 0, 00:13:00.216 "transports": [ 00:13:00.216 { 00:13:00.216 "trtype": "TCP" 00:13:00.216 } 00:13:00.216 ] 00:13:00.216 }, 00:13:00.216 { 00:13:00.216 "name": "nvmf_tgt_poll_group_002", 00:13:00.216 "admin_qpairs": 0, 00:13:00.216 "io_qpairs": 0, 00:13:00.216 "current_admin_qpairs": 0, 00:13:00.216 "current_io_qpairs": 0, 00:13:00.216 "pending_bdev_io": 0, 00:13:00.216 "completed_nvme_io": 0, 00:13:00.216 "transports": [ 00:13:00.216 { 00:13:00.216 "trtype": "TCP" 00:13:00.216 } 00:13:00.216 ] 00:13:00.216 }, 00:13:00.216 { 00:13:00.216 "name": "nvmf_tgt_poll_group_003", 00:13:00.216 "admin_qpairs": 0, 00:13:00.216 "io_qpairs": 0, 00:13:00.216 "current_admin_qpairs": 0, 00:13:00.216 "current_io_qpairs": 0, 00:13:00.216 "pending_bdev_io": 0, 00:13:00.216 "completed_nvme_io": 0, 00:13:00.216 "transports": [ 00:13:00.216 { 00:13:00.216 "trtype": "TCP" 00:13:00.216 } 00:13:00.216 ] 00:13:00.216 } 00:13:00.216 ] 00:13:00.216 }' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.216 
21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:00.216 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 Malloc1 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:00.217 21:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 [2024-12-05 21:06:08.245888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:00.217 [2024-12-05 21:06:08.274416] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:00.217 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:00.217 could not add new controller: failed to write to nvme-fabrics device 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.217 21:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.592 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.592 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.592 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.592 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.592 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.493 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.493 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.493 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.493 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.493 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.493 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:03.494 21:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.494 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.494 [2024-12-05 21:06:11.580729] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:03.751 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.751 could not add new controller: failed to write to nvme-fabrics device 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:03.751 
21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.751 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.125 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.125 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.125 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.125 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:05.125 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:07.018 21:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.018 [2024-12-05 21:06:14.950727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.018 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.392 21:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.393 21:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.393 21:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.393 21:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.393 21:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.293 
21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 [2024-12-05 21:06:18.248205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.294 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.294 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.294 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.294 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.294 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.294 21:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.667 21:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.667 21:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.667 21:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.667 21:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.667 21:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.708 21:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.708 [2024-12-05 21:06:21.520143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.708 21:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.640 21:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.640 21:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.640 21:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.640 21:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:14.640 21:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:17.171 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.171 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.171 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.171 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:17.171 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 [2024-12-05 21:06:24.873978] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.106 21:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.106 21:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:18.106 21:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:18.106 21:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:18.106 21:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:20.002 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:20.002 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:20.002 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.002 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:20.002 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.002 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:20.002 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.260 [2024-12-05 21:06:28.181511] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.260 21:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.193 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.193 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:21.193 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.193 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:21.193 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 [2024-12-05 21:06:31.504297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 [2024-12-05 21:06:31.552322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 
21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:23.719 [2024-12-05 21:06:31.600468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.719 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 [2024-12-05 21:06:31.648647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 [2024-12-05 21:06:31.696814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:23.720 "tick_rate": 2100000000, 00:13:23.720 "poll_groups": [ 00:13:23.720 { 00:13:23.720 "name": "nvmf_tgt_poll_group_000", 00:13:23.720 "admin_qpairs": 2, 00:13:23.720 "io_qpairs": 168, 00:13:23.720 "current_admin_qpairs": 0, 00:13:23.720 "current_io_qpairs": 0, 00:13:23.720 "pending_bdev_io": 0, 00:13:23.720 "completed_nvme_io": 268, 00:13:23.720 "transports": [ 00:13:23.720 { 00:13:23.720 "trtype": "TCP" 00:13:23.720 } 00:13:23.720 ] 00:13:23.720 }, 00:13:23.720 { 00:13:23.720 "name": "nvmf_tgt_poll_group_001", 00:13:23.720 "admin_qpairs": 2, 00:13:23.720 "io_qpairs": 168, 00:13:23.720 "current_admin_qpairs": 0, 00:13:23.720 "current_io_qpairs": 0, 00:13:23.720 "pending_bdev_io": 0, 00:13:23.720 "completed_nvme_io": 219, 00:13:23.720 "transports": [ 00:13:23.720 { 00:13:23.720 "trtype": "TCP" 00:13:23.720 } 00:13:23.720 ] 00:13:23.720 }, 00:13:23.720 { 00:13:23.720 "name": "nvmf_tgt_poll_group_002", 00:13:23.720 "admin_qpairs": 1, 00:13:23.720 "io_qpairs": 168, 00:13:23.720 "current_admin_qpairs": 0, 00:13:23.720 "current_io_qpairs": 0, 00:13:23.720 "pending_bdev_io": 0, 
00:13:23.720 "completed_nvme_io": 268, 00:13:23.720 "transports": [ 00:13:23.720 { 00:13:23.720 "trtype": "TCP" 00:13:23.720 } 00:13:23.720 ] 00:13:23.720 }, 00:13:23.720 { 00:13:23.720 "name": "nvmf_tgt_poll_group_003", 00:13:23.720 "admin_qpairs": 2, 00:13:23.720 "io_qpairs": 168, 00:13:23.720 "current_admin_qpairs": 0, 00:13:23.720 "current_io_qpairs": 0, 00:13:23.720 "pending_bdev_io": 0, 00:13:23.720 "completed_nvme_io": 267, 00:13:23.720 "transports": [ 00:13:23.720 { 00:13:23.720 "trtype": "TCP" 00:13:23.720 } 00:13:23.720 ] 00:13:23.720 } 00:13:23.720 ] 00:13:23.720 }' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.720 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.979 rmmod nvme_tcp 00:13:23.979 rmmod nvme_fabrics 00:13:23.979 rmmod nvme_keyring 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1241835 ']' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1241835 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1241835 ']' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1241835 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241835 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241835' 00:13:23.979 killing process with pid 1241835 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1241835 00:13:23.979 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1241835 00:13:24.238 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.238 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.238 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.238 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:24.238 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.239 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.141 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.141 00:13:26.141 real 0m33.431s 00:13:26.141 user 1m41.312s 00:13:26.141 sys 0m6.555s 00:13:26.141 21:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.141 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.141 ************************************ 00:13:26.141 END TEST nvmf_rpc 00:13:26.141 ************************************ 00:13:26.400 21:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.400 21:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:26.400 21:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.400 21:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.400 ************************************ 00:13:26.400 START TEST nvmf_invalid 00:13:26.400 ************************************ 00:13:26.400 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.400 * Looking for test storage... 
00:13:26.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.401 --rc genhtml_branch_coverage=1 00:13:26.401 --rc 
genhtml_function_coverage=1 00:13:26.401 --rc genhtml_legend=1 00:13:26.401 --rc geninfo_all_blocks=1 00:13:26.401 --rc geninfo_unexecuted_blocks=1 00:13:26.401 00:13:26.401 ' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.401 --rc genhtml_branch_coverage=1 00:13:26.401 --rc genhtml_function_coverage=1 00:13:26.401 --rc genhtml_legend=1 00:13:26.401 --rc geninfo_all_blocks=1 00:13:26.401 --rc geninfo_unexecuted_blocks=1 00:13:26.401 00:13:26.401 ' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.401 --rc genhtml_branch_coverage=1 00:13:26.401 --rc genhtml_function_coverage=1 00:13:26.401 --rc genhtml_legend=1 00:13:26.401 --rc geninfo_all_blocks=1 00:13:26.401 --rc geninfo_unexecuted_blocks=1 00:13:26.401 00:13:26.401 ' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.401 --rc genhtml_branch_coverage=1 00:13:26.401 --rc genhtml_function_coverage=1 00:13:26.401 --rc genhtml_legend=1 00:13:26.401 --rc geninfo_all_blocks=1 00:13:26.401 --rc geninfo_unexecuted_blocks=1 00:13:26.401 00:13:26.401 ' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.401 21:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.401 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.661 21:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:26.661 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:26.662 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.662 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:33.232 21:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.232 21:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:33.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:33.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:33.232 Found net devices under 0000:86:00.0: cvl_0_0 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:33.232 Found net devices under 0000:86:00.1: cvl_0_1 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.232 21:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.232 21:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:33.232 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:33.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:13:33.232 00:13:33.233 --- 10.0.0.2 ping statistics --- 00:13:33.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.233 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:13:33.233 00:13:33.233 --- 10.0.0.1 ping statistics --- 00:13:33.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.233 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:33.233 21:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1249904 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1249904 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1249904 ']' 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.233 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.233 [2024-12-05 21:06:40.550028] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:13:33.233 [2024-12-05 21:06:40.550084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.233 [2024-12-05 21:06:40.630019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.233 [2024-12-05 21:06:40.673065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.233 [2024-12-05 21:06:40.673103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.233 [2024-12-05 21:06:40.673110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.233 [2024-12-05 21:06:40.673116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.233 [2024-12-05 21:06:40.673121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:33.233 [2024-12-05 21:06:40.674764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.233 [2024-12-05 21:06:40.674875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.233 [2024-12-05 21:06:40.674980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.233 [2024-12-05 21:06:40.674980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:33.495 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15058 00:13:33.495 [2024-12-05 21:06:41.584234] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:33.752 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:33.752 { 00:13:33.752 "nqn": "nqn.2016-06.io.spdk:cnode15058", 00:13:33.752 "tgt_name": "foobar", 00:13:33.752 "method": "nvmf_create_subsystem", 00:13:33.752 "req_id": 1 00:13:33.752 } 00:13:33.752 Got JSON-RPC error 
response 00:13:33.752 response: 00:13:33.752 { 00:13:33.752 "code": -32603, 00:13:33.752 "message": "Unable to find target foobar" 00:13:33.752 }' 00:13:33.752 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:33.752 { 00:13:33.752 "nqn": "nqn.2016-06.io.spdk:cnode15058", 00:13:33.752 "tgt_name": "foobar", 00:13:33.752 "method": "nvmf_create_subsystem", 00:13:33.752 "req_id": 1 00:13:33.752 } 00:13:33.752 Got JSON-RPC error response 00:13:33.752 response: 00:13:33.752 { 00:13:33.752 "code": -32603, 00:13:33.752 "message": "Unable to find target foobar" 00:13:33.752 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:33.752 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:33.753 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4480 00:13:33.753 [2024-12-05 21:06:41.784971] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4480: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:33.753 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:33.753 { 00:13:33.753 "nqn": "nqn.2016-06.io.spdk:cnode4480", 00:13:33.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:33.753 "method": "nvmf_create_subsystem", 00:13:33.753 "req_id": 1 00:13:33.753 } 00:13:33.753 Got JSON-RPC error response 00:13:33.753 response: 00:13:33.753 { 00:13:33.753 "code": -32602, 00:13:33.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:33.753 }' 00:13:33.753 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:33.753 { 00:13:33.753 "nqn": "nqn.2016-06.io.spdk:cnode4480", 00:13:33.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:33.753 "method": "nvmf_create_subsystem", 00:13:33.753 
"req_id": 1 00:13:33.753 } 00:13:33.753 Got JSON-RPC error response 00:13:33.753 response: 00:13:33.753 { 00:13:33.753 "code": -32602, 00:13:33.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:33.753 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:33.753 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:33.753 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17656 00:13:34.011 [2024-12-05 21:06:42.001647] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17656: invalid model number 'SPDK_Controller' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:34.011 { 00:13:34.011 "nqn": "nqn.2016-06.io.spdk:cnode17656", 00:13:34.011 "model_number": "SPDK_Controller\u001f", 00:13:34.011 "method": "nvmf_create_subsystem", 00:13:34.011 "req_id": 1 00:13:34.011 } 00:13:34.011 Got JSON-RPC error response 00:13:34.011 response: 00:13:34.011 { 00:13:34.011 "code": -32602, 00:13:34.011 "message": "Invalid MN SPDK_Controller\u001f" 00:13:34.011 }' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:34.011 { 00:13:34.011 "nqn": "nqn.2016-06.io.spdk:cnode17656", 00:13:34.011 "model_number": "SPDK_Controller\u001f", 00:13:34.011 "method": "nvmf_create_subsystem", 00:13:34.011 "req_id": 1 00:13:34.011 } 00:13:34.011 Got JSON-RPC error response 00:13:34.011 response: 00:13:34.011 { 00:13:34.011 "code": -32602, 00:13:34.011 "message": "Invalid MN SPDK_Controller\u001f" 00:13:34.011 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:34.011 
21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:34.011 21:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.011 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.269 21:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:34.269 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.270 21:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:34.270 21:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:13:34.270 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n!HDYU@ /dev/null' 00:13:36.854 21:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.383 00:13:39.383 real 0m12.587s 00:13:39.383 user 0m20.884s 00:13:39.383 sys 0m5.428s 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:39.383 ************************************ 00:13:39.383 END TEST nvmf_invalid 00:13:39.383 ************************************ 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.383 ************************************ 00:13:39.383 START TEST nvmf_connect_stress 00:13:39.383 ************************************ 00:13:39.383 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.383 * Looking for test 
storage... 00:13:39.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:39.383 21:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.383 21:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:39.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.383 --rc genhtml_branch_coverage=1 00:13:39.383 --rc genhtml_function_coverage=1 00:13:39.383 --rc genhtml_legend=1 00:13:39.383 --rc geninfo_all_blocks=1 00:13:39.383 --rc geninfo_unexecuted_blocks=1 00:13:39.383 00:13:39.383 ' 00:13:39.383 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:39.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.383 --rc genhtml_branch_coverage=1 00:13:39.383 --rc genhtml_function_coverage=1 00:13:39.384 --rc genhtml_legend=1 00:13:39.384 --rc geninfo_all_blocks=1 00:13:39.384 --rc geninfo_unexecuted_blocks=1 00:13:39.384 00:13:39.384 ' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:39.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.384 --rc genhtml_branch_coverage=1 00:13:39.384 --rc genhtml_function_coverage=1 00:13:39.384 --rc genhtml_legend=1 00:13:39.384 --rc geninfo_all_blocks=1 00:13:39.384 --rc geninfo_unexecuted_blocks=1 00:13:39.384 00:13:39.384 ' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:39.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.384 --rc genhtml_branch_coverage=1 00:13:39.384 --rc genhtml_function_coverage=1 00:13:39.384 --rc genhtml_legend=1 00:13:39.384 --rc geninfo_all_blocks=1 00:13:39.384 --rc geninfo_unexecuted_blocks=1 00:13:39.384 00:13:39.384 ' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.384 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.950 21:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.950 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:45.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.951 21:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:45.951 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.951 21:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:45.951 Found net devices under 0000:86:00.0: cvl_0_0 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:45.951 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:45.951 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.951 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.951 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.951 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.951 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:45.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:13:45.951 00:13:45.951 --- 10.0.0.2 ping statistics --- 00:13:45.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.951 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:13:45.951 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:13:45.951 00:13:45.951 --- 10.0.0.1 ping statistics --- 00:13:45.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.952 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:45.952 21:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1254268 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1254268 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1254268 ']' 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 [2024-12-05 21:06:53.149348] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:13:45.952 [2024-12-05 21:06:53.149411] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.952 [2024-12-05 21:06:53.229359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.952 [2024-12-05 21:06:53.270839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.952 [2024-12-05 21:06:53.270876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.952 [2024-12-05 21:06:53.270883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.952 [2024-12-05 21:06:53.270889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.952 [2024-12-05 21:06:53.270894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.952 [2024-12-05 21:06:53.272218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.952 [2024-12-05 21:06:53.272323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.952 [2024-12-05 21:06:53.272324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 [2024-12-05 21:06:53.408612] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 [2024-12-05 21:06:53.428808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 NULL1 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1254482 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.952 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.953 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.211 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.211 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:46.211 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.211 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.211 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.469 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.469 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:46.469 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.469 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.469 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.727 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.727 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:46.727 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.727 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.727 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.291 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.291 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:47.291 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.291 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.291 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.549 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:47.549 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.549 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.549 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.806 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.806 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:47.806 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.806 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.806 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.063 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.063 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:48.063 21:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.063 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.063 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.673 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.673 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:48.673 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.673 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.673 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.008 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.008 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:49.008 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.008 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.008 21:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.008 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.008 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:49.008 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.008 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.008 
21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.572 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.572 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:49.572 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.572 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.572 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.829 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.829 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:49.829 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.829 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.829 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.086 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.086 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:50.086 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.086 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.086 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.342 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.342 
21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:50.342 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.342 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.342 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.906 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.907 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:50.907 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.907 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.907 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.164 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.164 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:51.164 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.164 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.164 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.422 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.422 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:51.422 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:51.422 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.422 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.679 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.679 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:51.679 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.679 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.679 21:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.936 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.936 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:51.936 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.936 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.936 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.501 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.501 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:52.501 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.501 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.501 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:52.758 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.758 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:52.758 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.758 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.759 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.016 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.016 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:53.016 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.016 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.016 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.274 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.274 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:53.274 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.274 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.274 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.841 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.841 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1254482 00:13:53.841 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.841 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.841 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.100 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.100 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:54.100 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.100 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.100 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.359 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.359 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:54.359 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.359 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.359 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.617 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.617 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:54.617 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.617 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:54.617 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.876 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.876 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:54.876 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.876 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.876 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.442 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.442 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:55.442 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.442 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.442 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.442 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1254482 00:13:55.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1254482) - No such process 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1254482 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.701 rmmod nvme_tcp 00:13:55.701 rmmod nvme_fabrics 00:13:55.701 rmmod nvme_keyring 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1254268 ']' 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1254268 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1254268 ']' 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1254268 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254268 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254268' 00:13:55.701 killing process with pid 1254268 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1254268 00:13:55.701 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1254268 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.961 21:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.961 21:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.864 21:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:57.864 00:13:57.864 real 0m18.999s 00:13:57.864 user 0m39.153s 00:13:57.864 sys 0m8.620s 00:13:57.864 21:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.864 21:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.864 ************************************ 00:13:57.864 END TEST nvmf_connect_stress 00:13:57.864 ************************************ 00:13:58.123 21:07:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.123 21:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.123 21:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.123 21:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.123 ************************************ 00:13:58.123 START TEST nvmf_fused_ordering 00:13:58.123 ************************************ 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.123 * Looking for test storage... 
00:13:58.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:58.123 21:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.123 21:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.123 --rc genhtml_branch_coverage=1 00:13:58.123 --rc genhtml_function_coverage=1 00:13:58.123 --rc genhtml_legend=1 00:13:58.123 --rc geninfo_all_blocks=1 00:13:58.123 --rc geninfo_unexecuted_blocks=1 00:13:58.123 00:13:58.123 ' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.123 --rc genhtml_branch_coverage=1 00:13:58.123 --rc genhtml_function_coverage=1 00:13:58.123 --rc genhtml_legend=1 00:13:58.123 --rc geninfo_all_blocks=1 00:13:58.123 --rc geninfo_unexecuted_blocks=1 00:13:58.123 00:13:58.123 ' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.123 --rc genhtml_branch_coverage=1 00:13:58.123 --rc genhtml_function_coverage=1 00:13:58.123 --rc genhtml_legend=1 00:13:58.123 --rc geninfo_all_blocks=1 00:13:58.123 --rc geninfo_unexecuted_blocks=1 00:13:58.123 00:13:58.123 ' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.123 --rc genhtml_branch_coverage=1 00:13:58.123 --rc genhtml_function_coverage=1 00:13:58.123 --rc genhtml_legend=1 00:13:58.123 --rc geninfo_all_blocks=1 00:13:58.123 --rc geninfo_unexecuted_blocks=1 00:13:58.123 00:13:58.123 ' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.123 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:58.382 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:58.383 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.948 21:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:04.948 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.948 21:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:04.948 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.948 21:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:04.948 Found net devices under 0000:86:00.0: cvl_0_0 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:04.948 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:04.948 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.949 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:14:04.949 00:14:04.949 --- 10.0.0.2 ping statistics --- 00:14:04.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.949 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:14:04.949 00:14:04.949 --- 10.0.0.1 ping statistics --- 00:14:04.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.949 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:04.949 21:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1259641 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1259641 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1259641 ']' 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 [2024-12-05 21:07:12.276372] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:14:04.949 [2024-12-05 21:07:12.276416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.949 [2024-12-05 21:07:12.354334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.949 [2024-12-05 21:07:12.394869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.949 [2024-12-05 21:07:12.394903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.949 [2024-12-05 21:07:12.394910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.949 [2024-12-05 21:07:12.394916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.949 [2024-12-05 21:07:12.394921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
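The launch pattern above (nvmf_tgt started inside the cvl_0_0_ns_spdk namespace via `ip netns exec`, then `waitforlisten` polling the UNIX domain socket /var/tmp/spdk.sock before any RPCs are issued) can be sketched as follows. `wait_for_rpc_sock` is a hypothetical, simplified stand-in for the harness's `waitforlisten` helper, not the actual autotest implementation; the launch command in the comment is taken verbatim from the log.

```shell
#!/usr/bin/env bash
# Hedged sketch of the launch-and-wait pattern seen in the log above.
# Actual launch (requires root and the SPDK build tree; flags from the log):
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
#
# wait_for_rpc_sock: poll until a UNIX-domain socket appears, the way the
# harness waits for /var/tmp/spdk.sock before issuing RPCs (assumption:
# simplified stand-in for waitforlisten).
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket file exists: target is listening
        sleep 0.1
    done
    return 1                          # gave up: target never came up
}
```

In the log this wait is what `waitforlisten 1259641` corresponds to: the script blocks until the target process owns /var/tmp/spdk.sock, then proceeds to configuration RPCs.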
00:14:04.949 [2024-12-05 21:07:12.395494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 [2024-12-05 21:07:12.527162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 [2024-12-05 21:07:12.547341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 NULL1 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.949 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:04.949 [2024-12-05 21:07:12.605246] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:14:04.950 [2024-12-05 21:07:12.605275] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259664 ] 00:14:05.208 Attached to nqn.2016-06.io.spdk:cnode1 00:14:05.208 Namespace ID: 1 size: 1GB 00:14:05.208 fused_ordering(0) 00:14:05.208 fused_ordering(1) 00:14:05.208 fused_ordering(2) 00:14:05.208 fused_ordering(3) 00:14:05.208 fused_ordering(4) 00:14:05.208 fused_ordering(5) 00:14:05.208 fused_ordering(6) 00:14:05.208 fused_ordering(7) 00:14:05.208 fused_ordering(8) 00:14:05.208 fused_ordering(9) 00:14:05.208 fused_ordering(10) 00:14:05.208 fused_ordering(11) 00:14:05.208 fused_ordering(12) 00:14:05.208 fused_ordering(13) 00:14:05.208 fused_ordering(14) 00:14:05.208 fused_ordering(15) 00:14:05.208 fused_ordering(16) 00:14:05.208 fused_ordering(17) 00:14:05.208 fused_ordering(18) 00:14:05.208 fused_ordering(19) 00:14:05.208 fused_ordering(20) 00:14:05.208 fused_ordering(21) 00:14:05.208 fused_ordering(22) 00:14:05.208 fused_ordering(23) 00:14:05.208 fused_ordering(24) 00:14:05.208 fused_ordering(25) 00:14:05.208 fused_ordering(26) 00:14:05.208 fused_ordering(27) 00:14:05.208 
fused_ordering(28) [... fused_ordering(29) through fused_ordering(754) elided: identical monotonically incrementing counter lines, timestamps advancing from 00:14:05.208 to 00:14:05.988 ...] 00:14:05.988 fused_ordering(755)
00:14:05.988 fused_ordering(756) 00:14:05.988 fused_ordering(757) 00:14:05.988 fused_ordering(758) 00:14:05.988 fused_ordering(759) 00:14:05.988 fused_ordering(760) 00:14:05.988 fused_ordering(761) 00:14:05.988 fused_ordering(762) 00:14:05.988 fused_ordering(763) 00:14:05.988 fused_ordering(764) 00:14:05.988 fused_ordering(765) 00:14:05.988 fused_ordering(766) 00:14:05.988 fused_ordering(767) 00:14:05.988 fused_ordering(768) 00:14:05.988 fused_ordering(769) 00:14:05.988 fused_ordering(770) 00:14:05.988 fused_ordering(771) 00:14:05.988 fused_ordering(772) 00:14:05.988 fused_ordering(773) 00:14:05.988 fused_ordering(774) 00:14:05.988 fused_ordering(775) 00:14:05.988 fused_ordering(776) 00:14:05.988 fused_ordering(777) 00:14:05.988 fused_ordering(778) 00:14:05.988 fused_ordering(779) 00:14:05.988 fused_ordering(780) 00:14:05.988 fused_ordering(781) 00:14:05.988 fused_ordering(782) 00:14:05.988 fused_ordering(783) 00:14:05.988 fused_ordering(784) 00:14:05.988 fused_ordering(785) 00:14:05.988 fused_ordering(786) 00:14:05.988 fused_ordering(787) 00:14:05.988 fused_ordering(788) 00:14:05.988 fused_ordering(789) 00:14:05.988 fused_ordering(790) 00:14:05.988 fused_ordering(791) 00:14:05.988 fused_ordering(792) 00:14:05.988 fused_ordering(793) 00:14:05.988 fused_ordering(794) 00:14:05.988 fused_ordering(795) 00:14:05.988 fused_ordering(796) 00:14:05.988 fused_ordering(797) 00:14:05.988 fused_ordering(798) 00:14:05.988 fused_ordering(799) 00:14:05.988 fused_ordering(800) 00:14:05.988 fused_ordering(801) 00:14:05.988 fused_ordering(802) 00:14:05.988 fused_ordering(803) 00:14:05.988 fused_ordering(804) 00:14:05.989 fused_ordering(805) 00:14:05.989 fused_ordering(806) 00:14:05.989 fused_ordering(807) 00:14:05.989 fused_ordering(808) 00:14:05.989 fused_ordering(809) 00:14:05.989 fused_ordering(810) 00:14:05.989 fused_ordering(811) 00:14:05.989 fused_ordering(812) 00:14:05.989 fused_ordering(813) 00:14:05.989 fused_ordering(814) 00:14:05.989 fused_ordering(815) 00:14:05.989 
fused_ordering(816) 00:14:05.989 fused_ordering(817) 00:14:05.989 fused_ordering(818) 00:14:05.989 fused_ordering(819) 00:14:05.989 fused_ordering(820) 00:14:06.557 fused_ordering(821) 00:14:06.557 fused_ordering(822) 00:14:06.557 fused_ordering(823) 00:14:06.557 fused_ordering(824) 00:14:06.557 fused_ordering(825) 00:14:06.557 fused_ordering(826) 00:14:06.557 fused_ordering(827) 00:14:06.557 fused_ordering(828) 00:14:06.557 fused_ordering(829) 00:14:06.557 fused_ordering(830) 00:14:06.557 fused_ordering(831) 00:14:06.557 fused_ordering(832) 00:14:06.557 fused_ordering(833) 00:14:06.557 fused_ordering(834) 00:14:06.557 fused_ordering(835) 00:14:06.557 fused_ordering(836) 00:14:06.557 fused_ordering(837) 00:14:06.557 fused_ordering(838) 00:14:06.557 fused_ordering(839) 00:14:06.557 fused_ordering(840) 00:14:06.557 fused_ordering(841) 00:14:06.557 fused_ordering(842) 00:14:06.557 fused_ordering(843) 00:14:06.557 fused_ordering(844) 00:14:06.557 fused_ordering(845) 00:14:06.557 fused_ordering(846) 00:14:06.557 fused_ordering(847) 00:14:06.557 fused_ordering(848) 00:14:06.557 fused_ordering(849) 00:14:06.557 fused_ordering(850) 00:14:06.557 fused_ordering(851) 00:14:06.557 fused_ordering(852) 00:14:06.557 fused_ordering(853) 00:14:06.557 fused_ordering(854) 00:14:06.557 fused_ordering(855) 00:14:06.557 fused_ordering(856) 00:14:06.557 fused_ordering(857) 00:14:06.557 fused_ordering(858) 00:14:06.557 fused_ordering(859) 00:14:06.557 fused_ordering(860) 00:14:06.557 fused_ordering(861) 00:14:06.557 fused_ordering(862) 00:14:06.557 fused_ordering(863) 00:14:06.557 fused_ordering(864) 00:14:06.557 fused_ordering(865) 00:14:06.557 fused_ordering(866) 00:14:06.557 fused_ordering(867) 00:14:06.557 fused_ordering(868) 00:14:06.557 fused_ordering(869) 00:14:06.557 fused_ordering(870) 00:14:06.557 fused_ordering(871) 00:14:06.557 fused_ordering(872) 00:14:06.557 fused_ordering(873) 00:14:06.557 fused_ordering(874) 00:14:06.557 fused_ordering(875) 00:14:06.557 fused_ordering(876) 
00:14:06.557 fused_ordering(877) 00:14:06.557 fused_ordering(878) 00:14:06.557 fused_ordering(879) 00:14:06.557 fused_ordering(880) 00:14:06.557 fused_ordering(881) 00:14:06.557 fused_ordering(882) 00:14:06.557 fused_ordering(883) 00:14:06.557 fused_ordering(884) 00:14:06.557 fused_ordering(885) 00:14:06.557 fused_ordering(886) 00:14:06.557 fused_ordering(887) 00:14:06.557 fused_ordering(888) 00:14:06.557 fused_ordering(889) 00:14:06.558 fused_ordering(890) 00:14:06.558 fused_ordering(891) 00:14:06.558 fused_ordering(892) 00:14:06.558 fused_ordering(893) 00:14:06.558 fused_ordering(894) 00:14:06.558 fused_ordering(895) 00:14:06.558 fused_ordering(896) 00:14:06.558 fused_ordering(897) 00:14:06.558 fused_ordering(898) 00:14:06.558 fused_ordering(899) 00:14:06.558 fused_ordering(900) 00:14:06.558 fused_ordering(901) 00:14:06.558 fused_ordering(902) 00:14:06.558 fused_ordering(903) 00:14:06.558 fused_ordering(904) 00:14:06.558 fused_ordering(905) 00:14:06.558 fused_ordering(906) 00:14:06.558 fused_ordering(907) 00:14:06.558 fused_ordering(908) 00:14:06.558 fused_ordering(909) 00:14:06.558 fused_ordering(910) 00:14:06.558 fused_ordering(911) 00:14:06.558 fused_ordering(912) 00:14:06.558 fused_ordering(913) 00:14:06.558 fused_ordering(914) 00:14:06.558 fused_ordering(915) 00:14:06.558 fused_ordering(916) 00:14:06.558 fused_ordering(917) 00:14:06.558 fused_ordering(918) 00:14:06.558 fused_ordering(919) 00:14:06.558 fused_ordering(920) 00:14:06.558 fused_ordering(921) 00:14:06.558 fused_ordering(922) 00:14:06.558 fused_ordering(923) 00:14:06.558 fused_ordering(924) 00:14:06.558 fused_ordering(925) 00:14:06.558 fused_ordering(926) 00:14:06.558 fused_ordering(927) 00:14:06.558 fused_ordering(928) 00:14:06.558 fused_ordering(929) 00:14:06.558 fused_ordering(930) 00:14:06.558 fused_ordering(931) 00:14:06.558 fused_ordering(932) 00:14:06.558 fused_ordering(933) 00:14:06.558 fused_ordering(934) 00:14:06.558 fused_ordering(935) 00:14:06.558 fused_ordering(936) 00:14:06.558 
fused_ordering(937) 00:14:06.558 fused_ordering(938) 00:14:06.558 fused_ordering(939) 00:14:06.558 fused_ordering(940) 00:14:06.558 fused_ordering(941) 00:14:06.558 fused_ordering(942) 00:14:06.558 fused_ordering(943) 00:14:06.558 fused_ordering(944) 00:14:06.558 fused_ordering(945) 00:14:06.558 fused_ordering(946) 00:14:06.558 fused_ordering(947) 00:14:06.558 fused_ordering(948) 00:14:06.558 fused_ordering(949) 00:14:06.558 fused_ordering(950) 00:14:06.558 fused_ordering(951) 00:14:06.558 fused_ordering(952) 00:14:06.558 fused_ordering(953) 00:14:06.558 fused_ordering(954) 00:14:06.558 fused_ordering(955) 00:14:06.558 fused_ordering(956) 00:14:06.558 fused_ordering(957) 00:14:06.558 fused_ordering(958) 00:14:06.558 fused_ordering(959) 00:14:06.558 fused_ordering(960) 00:14:06.558 fused_ordering(961) 00:14:06.558 fused_ordering(962) 00:14:06.558 fused_ordering(963) 00:14:06.558 fused_ordering(964) 00:14:06.558 fused_ordering(965) 00:14:06.558 fused_ordering(966) 00:14:06.558 fused_ordering(967) 00:14:06.558 fused_ordering(968) 00:14:06.558 fused_ordering(969) 00:14:06.558 fused_ordering(970) 00:14:06.558 fused_ordering(971) 00:14:06.558 fused_ordering(972) 00:14:06.558 fused_ordering(973) 00:14:06.558 fused_ordering(974) 00:14:06.558 fused_ordering(975) 00:14:06.558 fused_ordering(976) 00:14:06.558 fused_ordering(977) 00:14:06.558 fused_ordering(978) 00:14:06.558 fused_ordering(979) 00:14:06.558 fused_ordering(980) 00:14:06.558 fused_ordering(981) 00:14:06.558 fused_ordering(982) 00:14:06.558 fused_ordering(983) 00:14:06.558 fused_ordering(984) 00:14:06.558 fused_ordering(985) 00:14:06.558 fused_ordering(986) 00:14:06.558 fused_ordering(987) 00:14:06.558 fused_ordering(988) 00:14:06.558 fused_ordering(989) 00:14:06.558 fused_ordering(990) 00:14:06.558 fused_ordering(991) 00:14:06.558 fused_ordering(992) 00:14:06.558 fused_ordering(993) 00:14:06.558 fused_ordering(994) 00:14:06.558 fused_ordering(995) 00:14:06.558 fused_ordering(996) 00:14:06.558 fused_ordering(997) 
00:14:06.558 fused_ordering(998) 00:14:06.558 fused_ordering(999) 00:14:06.558 fused_ordering(1000) 00:14:06.558 fused_ordering(1001) 00:14:06.558 fused_ordering(1002) 00:14:06.558 fused_ordering(1003) 00:14:06.558 fused_ordering(1004) 00:14:06.558 fused_ordering(1005) 00:14:06.558 fused_ordering(1006) 00:14:06.558 fused_ordering(1007) 00:14:06.558 fused_ordering(1008) 00:14:06.558 fused_ordering(1009) 00:14:06.558 fused_ordering(1010) 00:14:06.558 fused_ordering(1011) 00:14:06.558 fused_ordering(1012) 00:14:06.558 fused_ordering(1013) 00:14:06.558 fused_ordering(1014) 00:14:06.558 fused_ordering(1015) 00:14:06.558 fused_ordering(1016) 00:14:06.558 fused_ordering(1017) 00:14:06.558 fused_ordering(1018) 00:14:06.558 fused_ordering(1019) 00:14:06.558 fused_ordering(1020) 00:14:06.558 fused_ordering(1021) 00:14:06.558 fused_ordering(1022) 00:14:06.558 fused_ordering(1023) 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.558 rmmod nvme_tcp 00:14:06.558 rmmod nvme_fabrics 00:14:06.558 rmmod nvme_keyring 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1259641 ']' 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1259641 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1259641 ']' 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1259641 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1259641 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1259641' 00:14:06.558 killing process with pid 1259641 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1259641 00:14:06.558 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1259641 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.818 21:07:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.353 00:14:09.353 real 0m10.828s 00:14:09.353 user 0m5.222s 00:14:09.353 sys 0m5.867s 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.353 ************************************ 00:14:09.353 END TEST nvmf_fused_ordering 00:14:09.353 ************************************ 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:09.353 21:07:16 
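The killprocess sequence traced above (probe the pid with `kill -0`, check the process name so a bare `sudo` wrapper is never signalled, then kill and reap) is a reusable teardown pattern. A minimal standalone sketch, with illustrative names rather than SPDK's actual helpers:

```shell
#!/usr/bin/env bash
# Minimal sketch of the traced teardown: verify the pid is alive, refuse to
# signal a bare "sudo" wrapper, then kill the process and wait for it to exit.
killprocess_sketch() {
    local pid=$1 name
    # kill -0 delivers no signal; it only tests that the pid exists
    kill -0 "$pid" 2>/dev/null || return 0
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true   # reap it (works because it is our child)
    return 0
}

sleep 30 &                            # stand-in for the nvmf target process
bgpid=$!
killprocess_sketch "$bgpid"
```

After the call, `kill -0 "$bgpid"` fails, confirming the process is gone.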
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.353 ************************************ 00:14:09.353 START TEST nvmf_ns_masking 00:14:09.353 ************************************ 00:14:09.353 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:09.353 * Looking for test storage... 00:14:09.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.353 21:07:17 
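The `lt 1.15 2` / `cmp_versions` trace here compares version strings by splitting on `.`, `-`, and `:` and walking the components. A self-contained sketch of that logic (simplified to numeric components; `lt_sketch` is an illustrative name, not the script's real helper):

```shell
#!/usr/bin/env bash
# Component-wise version comparison: split on ".-:" as the traced
# IFS=.-: read -ra does, treat missing components as 0, and let the
# first differing component decide. Returns 0 when $1 < $2.
lt_sketch() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=${#ver1[@]} v a b
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly smaller: less-than holds
        (( a > b )) && return 1   # strictly larger: less-than fails
    done
    return 1                      # equal: not strictly less-than
}

lt_sketch 1.15 2 && echo "1.15 < 2"
lt_sketch 2 1.15 || echo "2 is not < 1.15"
```

With this ordering, `1.15 < 2` because the first components already differ, which is exactly why the traced run takes the "older lcov" branch.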
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:09.353 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.354 --rc genhtml_branch_coverage=1 00:14:09.354 --rc genhtml_function_coverage=1 00:14:09.354 --rc genhtml_legend=1 00:14:09.354 --rc geninfo_all_blocks=1 00:14:09.354 --rc geninfo_unexecuted_blocks=1 00:14:09.354 00:14:09.354 ' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.354 --rc genhtml_branch_coverage=1 00:14:09.354 --rc genhtml_function_coverage=1 00:14:09.354 --rc genhtml_legend=1 00:14:09.354 --rc geninfo_all_blocks=1 00:14:09.354 --rc geninfo_unexecuted_blocks=1 00:14:09.354 00:14:09.354 ' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.354 --rc genhtml_branch_coverage=1 00:14:09.354 --rc genhtml_function_coverage=1 00:14:09.354 --rc genhtml_legend=1 00:14:09.354 --rc geninfo_all_blocks=1 00:14:09.354 --rc geninfo_unexecuted_blocks=1 00:14:09.354 00:14:09.354 ' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.354 --rc genhtml_branch_coverage=1 00:14:09.354 --rc 
genhtml_function_coverage=1 00:14:09.354 --rc genhtml_legend=1 00:14:09.354 --rc geninfo_all_blocks=1 00:14:09.354 --rc geninfo_unexecuted_blocks=1 00:14:09.354 00:14:09.354 ' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=[value with /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin prepended seven times over before the standard system dirs; full duplicated value elided] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[duplicated PATH value elided] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[duplicated PATH value elided] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [duplicated PATH value elided] 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8f5a0f40-bd79-46dc-9a8b-d67aa4a0cedf 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0d21099a-e94c-4bc4-85f2-e5e2b8a17ac7 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:09.354 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a8cc1acd-9b1d-4bb6-9875-9ea2c50f13d3 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.355 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.926 21:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.926 21:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:14:15.926 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.926 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.927 Found net devices under 0000:86:00.1: cvl_0_1 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:15.927 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:15.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:14:15.927 00:14:15.927 --- 10.0.0.2 ping statistics --- 00:14:15.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.927 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:15.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:14:15.927 00:14:15.927 --- 10.0.0.1 ping statistics --- 00:14:15.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.927 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1263645 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1263645 
00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1263645 ']' 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.927 [2024-12-05 21:07:23.175641] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:14:15.927 [2024-12-05 21:07:23.175686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.927 [2024-12-05 21:07:23.252112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.927 [2024-12-05 21:07:23.292341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.927 [2024-12-05 21:07:23.292381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:15.927 [2024-12-05 21:07:23.292388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.927 [2024-12-05 21:07:23.292394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.927 [2024-12-05 21:07:23.292400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.927 [2024-12-05 21:07:23.292932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:15.927 [2024-12-05 21:07:23.585647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:15.927 Malloc1 00:14:15.927 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:15.927 Malloc2 00:14:16.187 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:16.187 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:16.446 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.705 [2024-12-05 21:07:24.619770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.705 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:16.705 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8cc1acd-9b1d-4bb6-9875-9ea2c50f13d3 -a 10.0.0.2 -s 4420 -i 4 00:14:16.705 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.705 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.705 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.705 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:16.705 21:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.235 [ 0]:0x1 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.235 
21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b9d4fe0202d74f3bafcd0d4e9bc73a61 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b9d4fe0202d74f3bafcd0d4e9bc73a61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.235 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.235 [ 0]:0x1 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b9d4fe0202d74f3bafcd0d4e9bc73a61 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b9d4fe0202d74f3bafcd0d4e9bc73a61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.235 [ 1]:0x2 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.235 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.494 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8cc1acd-9b1d-4bb6-9875-9ea2c50f13d3 -a 10.0.0.2 -s 4420 -i 4 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.753 21:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:19.753 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.281 [ 0]:0x2 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.281 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.281 [ 0]:0x1 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b9d4fe0202d74f3bafcd0d4e9bc73a61 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b9d4fe0202d74f3bafcd0d4e9bc73a61 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.281 [ 1]:0x2 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.281 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.539 [ 0]:0x2 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.539 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.797 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:22.797 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8cc1acd-9b1d-4bb6-9875-9ea2c50f13d3 -a 10.0.0.2 -s 4420 -i 4 00:14:23.054 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.054 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:23.054 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.054 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:23.054 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:23.054 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.955 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.955 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.955 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.955 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:24.956 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.956 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:24.956 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:24.956 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:24.956 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:24.956 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:24.956 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:24.956 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.956 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.956 [ 0]:0x1 00:14:24.956 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.956 21:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b9d4fe0202d74f3bafcd0d4e9bc73a61 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b9d4fe0202d74f3bafcd0d4e9bc73a61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.218 [ 1]:0x2 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.218 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.477 
21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.477 [ 0]:0x2 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.477 21:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.477 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:25.736 [2024-12-05 21:07:33.685691] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:25.736 request: 00:14:25.736 { 00:14:25.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.736 "nsid": 2, 00:14:25.736 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.736 "method": "nvmf_ns_remove_host", 00:14:25.736 "req_id": 1 00:14:25.736 } 00:14:25.736 Got JSON-RPC error response 00:14:25.736 response: 00:14:25.736 { 00:14:25.736 "code": -32602, 00:14:25.736 "message": "Invalid parameters" 00:14:25.736 } 00:14:25.736 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.736 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.737 21:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.737 [ 0]:0x2 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cb30214a7f246d0bec71f5607ad92c0 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cb30214a7f246d0bec71f5607ad92c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:25.737 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1265424 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1265424 
/var/tmp/host.sock 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1265424 ']' 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:25.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.996 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.996 [2024-12-05 21:07:33.902253] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
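The repeated `ns_is_visible` / `NOT ns_is_visible` checks traced above reduce to one comparison: a namespace masked from the host identifies with an all-zero NGUID via `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`. A minimal stand-alone sketch of that comparison follows; the NGUID values are taken from this log, and the sketch deliberately skips the real helper's `/dev/nvme0` queries so it runs without hardware:

```shell
# Sketch of the NGUID visibility check from target/ns_masking.sh (lines 43-45 in the trace).
# In the real helper, nguid comes from: nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid
ns_nguid_visible() {
    nguid=$1
    # A namespace hidden from this host reports an all-zero NGUID.
    [ "$nguid" != "00000000000000000000000000000000" ]
}

ns_nguid_visible 0cb30214a7f246d0bec71f5607ad92c0 && echo "ns visible"
ns_nguid_visible 00000000000000000000000000000000 || echo "ns masked"
```

The `NOT` wrapper from autotest_common.sh seen throughout the trace simply inverts this helper's exit status for the negative cases (namespace expected to be masked), which is why the all-zero comparisons above are followed by `es=1` rather than a failure.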
00:14:25.996 [2024-12-05 21:07:33.902300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265424 ] 00:14:25.996 [2024-12-05 21:07:33.976493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.996 [2024-12-05 21:07:34.018200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.255 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.255 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:26.255 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.514 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.773 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8f5a0f40-bd79-46dc-9a8b-d67aa4a0cedf 00:14:26.773 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:26.773 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8F5A0F40BD7946DC9A8BD67AA4A0CEDF -i 00:14:26.773 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0d21099a-e94c-4bc4-85f2-e5e2b8a17ac7 00:14:26.773 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:26.773 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0D21099AE94C4BC485F2E5E2B8A17AC7 -i 00:14:27.059 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:27.316 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:27.317 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:27.317 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:27.882 nvme0n1 00:14:27.882 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:27.882 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:27.882 nvme1n2 00:14:28.140 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:28.140 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:28.140 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:28.140 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:28.140 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:28.140 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:28.140 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:28.140 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:28.140 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:28.399 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8f5a0f40-bd79-46dc-9a8b-d67aa4a0cedf == \8\f\5\a\0\f\4\0\-\b\d\7\9\-\4\6\d\c\-\9\a\8\b\-\d\6\7\a\a\4\a\0\c\e\d\f ]] 00:14:28.399 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:28.399 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:28.399 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:28.657 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0d21099a-e94c-4bc4-85f2-e5e2b8a17ac7 == \0\d\2\1\0\9\9\a\-\e\9\4\c\-\4\b\c\4\-\8\5\f\2\-\e\5\e\2\b\8\a\1\7\a\c\7 ]] 00:14:28.657 21:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8f5a0f40-bd79-46dc-9a8b-d67aa4a0cedf 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F5A0F40BD7946DC9A8BD67AA4A0CEDF 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F5A0F40BD7946DC9A8BD67AA4A0CEDF 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:28.915 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F5A0F40BD7946DC9A8BD67AA4A0CEDF 00:14:29.172 [2024-12-05 21:07:37.159216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:29.172 [2024-12-05 21:07:37.159246] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:29.172 [2024-12-05 21:07:37.159254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.172 request: 00:14:29.172 { 00:14:29.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.172 "namespace": { 00:14:29.172 "bdev_name": "invalid", 00:14:29.172 "nsid": 1, 00:14:29.172 "nguid": "8F5A0F40BD7946DC9A8BD67AA4A0CEDF", 00:14:29.172 "no_auto_visible": false, 00:14:29.172 "hide_metadata": false 00:14:29.172 }, 00:14:29.172 "method": "nvmf_subsystem_add_ns", 00:14:29.172 "req_id": 1 00:14:29.172 } 00:14:29.172 Got JSON-RPC error response 00:14:29.172 response: 00:14:29.172 { 00:14:29.172 "code": -32602, 00:14:29.172 "message": "Invalid parameters" 00:14:29.172 } 00:14:29.172 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:29.172 21:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.172 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.172 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.172 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8f5a0f40-bd79-46dc-9a8b-d67aa4a0cedf 00:14:29.172 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.172 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8F5A0F40BD7946DC9A8BD67AA4A0CEDF -i 00:14:29.430 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1265424 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1265424 ']' 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1265424 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:31.465 21:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.465 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1265424 00:14:31.723 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:31.723 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:31.723 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1265424' 00:14:31.723 killing process with pid 1265424 00:14:31.723 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1265424 00:14:31.723 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1265424 00:14:31.982 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:32.241 rmmod nvme_tcp 00:14:32.241 rmmod nvme_fabrics 00:14:32.241 rmmod nvme_keyring 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1263645 ']' 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1263645 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1263645 ']' 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1263645 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263645 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263645' 00:14:32.241 killing process with pid 1263645 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1263645 00:14:32.241 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1263645 00:14:32.500 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:32.500 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.501 21:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.037 00:14:35.037 real 0m25.601s 00:14:35.037 user 0m30.516s 00:14:35.037 sys 0m7.035s 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.037 ************************************ 00:14:35.037 END TEST nvmf_ns_masking 00:14:35.037 ************************************ 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.037 ************************************ 00:14:35.037 START TEST nvmf_nvme_cli 00:14:35.037 ************************************ 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:35.037 * Looking for test storage... 00:14:35.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.037 --rc genhtml_branch_coverage=1 00:14:35.037 --rc genhtml_function_coverage=1 00:14:35.037 --rc genhtml_legend=1 00:14:35.037 --rc geninfo_all_blocks=1 00:14:35.037 --rc geninfo_unexecuted_blocks=1 00:14:35.037 
00:14:35.037 ' 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.037 --rc genhtml_branch_coverage=1 00:14:35.037 --rc genhtml_function_coverage=1 00:14:35.037 --rc genhtml_legend=1 00:14:35.037 --rc geninfo_all_blocks=1 00:14:35.037 --rc geninfo_unexecuted_blocks=1 00:14:35.037 00:14:35.037 ' 00:14:35.037 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.037 --rc genhtml_branch_coverage=1 00:14:35.037 --rc genhtml_function_coverage=1 00:14:35.038 --rc genhtml_legend=1 00:14:35.038 --rc geninfo_all_blocks=1 00:14:35.038 --rc geninfo_unexecuted_blocks=1 00:14:35.038 00:14:35.038 ' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:35.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.038 --rc genhtml_branch_coverage=1 00:14:35.038 --rc genhtml_function_coverage=1 00:14:35.038 --rc genhtml_legend=1 00:14:35.038 --rc geninfo_all_blocks=1 00:14:35.038 --rc geninfo_unexecuted_blocks=1 00:14:35.038 00:14:35.038 ' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.038 21:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:35.038 21:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:41.602 21:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.602 21:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.602 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.602 Found net devices under 0000:86:00.1: cvl_0_1 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.602 21:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.602 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:14:41.603 00:14:41.603 --- 10.0.0.2 ping statistics --- 00:14:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.603 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:41.603 00:14:41.603 --- 10.0.0.1 ping statistics --- 00:14:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.603 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:41.603 21:07:48 
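The interface plumbing traced above (nvmf/common.sh@265-291) is a standard two-interface/netns pattern: move the target-side port into a private namespace, address both sides, open the NVMe/TCP port, and verify with ping. A minimal standalone sketch — the `cvl_0_*` interface names and `10.0.0.0/24` addresses are what this rig happens to use, and the function must run as root:

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup traced in nvmf/common.sh. Interface names
# (cvl_0_0/cvl_0_1) and addresses are the values from this run; substitute
# your own. Defined as a function; invoke setup_target_ns explicitly as root.
setup_target_ns() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0           # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"                 # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns "$ns"    # target-side port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # Open port 4420; the comment tag lets teardown strip exactly this rule
    # later via iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify connectivity in both directions before the test proceeds.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The namespace gives the target a genuinely separate network stack, so initiator and target traffic cross the physical link even though both ends live on one host.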
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1270138 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1270138 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1270138 ']' 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.603 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.603 [2024-12-05 21:07:48.812391] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:14:41.603 [2024-12-05 21:07:48.812441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.603 [2024-12-05 21:07:48.893037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.603 [2024-12-05 21:07:48.936477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.603 [2024-12-05 21:07:48.936516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.603 [2024-12-05 21:07:48.936523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.603 [2024-12-05 21:07:48.936529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.603 [2024-12-05 21:07:48.936534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:41.603 [2024-12-05 21:07:48.938061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.603 [2024-12-05 21:07:48.938173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.603 [2024-12-05 21:07:48.938283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.603 [2024-12-05 21:07:48.938283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.603 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 [2024-12-05 21:07:49.710717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
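The `nvmfappstart`/`waitforlisten` pair traced above launches `nvmf_tgt` inside the target namespace and then polls until its RPC socket answers. A rough sketch — the binary and `rpc.py` paths are this workspace's layout, and probing with `rpc_get_methods` is an illustrative stand-in for the real waitforlisten logic:

```shell
#!/usr/bin/env bash
# Sketch of nvmfappstart as traced: run nvmf_tgt in the netns with the same
# flags as the log (-i 0 -e 0xFFFF -m 0xF), then retry an RPC against
# /var/tmp/spdk.sock until the app is listening. Paths are assumptions
# matching this workspace; adjust for your checkout.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

start_and_wait() {
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten: up to 100 retries, as in autotest_common.sh@840.
    for _ in $(seq 1 100); do
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && return 0
        kill -0 "$nvmfpid" 2>/dev/null || return 1   # app died during startup
        sleep 0.5
    done
    return 1
}
```

Note the Unix-domain RPC socket lives on the shared filesystem, so `rpc.py` can reach the app without entering the namespace.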
00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 Malloc0 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 Malloc1 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 [2024-12-05 21:07:49.805766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:41.861 00:14:41.861 Discovery Log Number of Records 2, Generation counter 2 00:14:41.861 =====Discovery Log Entry 0====== 00:14:41.861 trtype: tcp 00:14:41.861 adrfam: ipv4 00:14:41.861 subtype: current discovery subsystem 00:14:41.861 treq: not required 00:14:41.861 portid: 0 00:14:41.861 trsvcid: 4420 
00:14:41.861 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:41.861 traddr: 10.0.0.2 00:14:41.861 eflags: explicit discovery connections, duplicate discovery information 00:14:41.861 sectype: none 00:14:41.861 =====Discovery Log Entry 1====== 00:14:41.861 trtype: tcp 00:14:41.861 adrfam: ipv4 00:14:41.861 subtype: nvme subsystem 00:14:41.861 treq: not required 00:14:41.861 portid: 0 00:14:41.861 trsvcid: 4420 00:14:41.861 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:41.861 traddr: 10.0.0.2 00:14:41.861 eflags: none 00:14:41.861 sectype: none 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:41.861 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.118 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:42.118 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.488 21:07:51 
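The `rpc_cmd` sequence traced above (target/nvme_cli.sh@19-28) builds the whole target configuration that the discovery log then reflects: one TCP transport, two 64 MiB malloc bdevs exported as namespaces of `cnode1`, a data listener, and a discovery listener. A sketch of the same sequence, wrapped in a function since it needs a running `nvmf_tgt` (the `rpc.py` path is this workspace's layout):

```shell
#!/usr/bin/env bash
# Sketch of the subsystem build traced in target/nvme_cli.sh. All NQNs,
# serial, and addresses are the values from this run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

build_subsystem() {
    "$RPC" nvmf_create_transport -t tcp -o -u 8192     # TCP, 8 KiB in-capsule
    "$RPC" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    "$RPC" bdev_malloc_create 64 512 -b Malloc1
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
# The initiator side then sees the two discovery records shown in the log:
#   nvme discover -t tcp -a 10.0.0.2 -s 4420
#   nvme connect  -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The two discovery-log entries in the trace correspond directly to the two listeners registered here: the discovery subsystem itself and `cnode1`.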
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:43.488 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:43.488 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.488 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:43.488 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:43.488 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:45.382 
21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:45.382 /dev/nvme0n2 ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.382 rmmod nvme_tcp 00:14:45.382 rmmod nvme_fabrics 00:14:45.382 rmmod nvme_keyring 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1270138 ']' 
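The `waitforserial`/`waitforserial_disconnect` helpers polled in the trace above (autotest_common.sh@1202-1235) can be sketched as two small loops over `lsblk`: after `nvme connect`, wait until the expected number of block devices carrying the subsystem serial appear; after `nvme disconnect`, wait until none remain. Retry counts and the 2-second sleep follow the trace:

```shell
#!/usr/bin/env bash
# Sketch of the serial-polling helpers traced in autotest_common.sh.
# Usage matching the log: waitforserial SPDKISFASTANDAWESOME 2
waitforserial() {
    local serial=$1 want=${2:-1} i=0 have
    while (( i++ <= 15 )); do
        sleep 2                                    # give udev time to settle
        have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( have == want )) && return 0             # both namespaces visible
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}
```

Polling `lsblk` rather than trusting the `nvme connect` exit status matters here: the controller can be connected while its namespaces are still being enumerated, so the test only proceeds once both block devices are actually visible.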
00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1270138 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1270138 ']' 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1270138 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.382 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270138 00:14:45.383 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.383 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.383 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270138' 00:14:45.383 killing process with pid 1270138 00:14:45.383 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1270138 00:14:45.383 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1270138 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.641 21:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.172 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.172 00:14:48.172 real 0m13.144s 00:14:48.172 user 0m20.740s 00:14:48.172 sys 0m5.143s 00:14:48.172 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.173 ************************************ 00:14:48.173 END TEST nvmf_nvme_cli 00:14:48.173 ************************************ 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.173 ************************************ 00:14:48.173 
START TEST nvmf_vfio_user 00:14:48.173 ************************************ 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.173 * Looking for test storage... 00:14:48.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.173 21:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:48.173 21:07:55 
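The `cmp_versions`/`lt` trace above (scripts/common.sh@333-368, deciding whether lcov 1.15 < 2) is a dotted-version comparison: split both versions on `.`, `-`, and `:`, then compare field by field, padding the shorter one with zeros. A condensed sketch of the same logic:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced in scripts/common.sh. Splits on
# the same separators (.-:) and compares numeric fields left to right.
cmp_versions() {
    local op=$2 IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v d1 d2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}                 # missing fields count as 0
        d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    # All fields equal: only the equality-admitting operators succeed.
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }     # the form used by the trace
```

So `lt 1.15 2` succeeds (1 < 2 on the first field), which is why the trace selects the lcov 1.x option set here.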
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.173 --rc genhtml_branch_coverage=1 00:14:48.173 --rc genhtml_function_coverage=1 00:14:48.173 --rc genhtml_legend=1 00:14:48.173 --rc geninfo_all_blocks=1 00:14:48.173 --rc geninfo_unexecuted_blocks=1 00:14:48.173 00:14:48.173 ' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.173 --rc genhtml_branch_coverage=1 00:14:48.173 --rc genhtml_function_coverage=1 00:14:48.173 --rc genhtml_legend=1 00:14:48.173 --rc geninfo_all_blocks=1 00:14:48.173 --rc geninfo_unexecuted_blocks=1 00:14:48.173 00:14:48.173 ' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.173 --rc genhtml_branch_coverage=1 00:14:48.173 --rc genhtml_function_coverage=1 00:14:48.173 --rc genhtml_legend=1 00:14:48.173 --rc geninfo_all_blocks=1 00:14:48.173 --rc geninfo_unexecuted_blocks=1 00:14:48.173 00:14:48.173 ' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.173 --rc genhtml_branch_coverage=1 00:14:48.173 --rc genhtml_function_coverage=1 00:14:48.173 --rc genhtml_legend=1 00:14:48.173 --rc geninfo_all_blocks=1 00:14:48.173 --rc geninfo_unexecuted_blocks=1 00:14:48.173 00:14:48.173 ' 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.173 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.173 
21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.173 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:48.174 21:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1271441 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1271441' 00:14:48.174 Process pid: 1271441 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1271441 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1271441 ']' 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.174 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:48.174 [2024-12-05 21:07:56.082698] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:14:48.174 [2024-12-05 21:07:56.082748] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.174 [2024-12-05 21:07:56.156821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.174 [2024-12-05 21:07:56.195924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.174 [2024-12-05 21:07:56.195965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.174 [2024-12-05 21:07:56.195972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.174 [2024-12-05 21:07:56.195978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.174 [2024-12-05 21:07:56.195983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.174 [2024-12-05 21:07:56.197595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.174 [2024-12-05 21:07:56.197709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.174 [2024-12-05 21:07:56.197794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.174 [2024-12-05 21:07:56.197793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.431 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.432 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:48.432 21:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:49.365 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:49.623 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:49.623 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:49.623 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.623 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:49.623 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:49.623 Malloc1 00:14:49.880 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:49.880 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:50.138 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:50.396 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.396 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:50.396 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:50.653 Malloc2 00:14:50.653 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:50.653 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:50.911 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:51.170 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:51.170 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:51.170 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:51.170 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:51.170 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:51.170 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:51.170 [2024-12-05 21:07:59.166147] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:14:51.170 [2024-12-05 21:07:59.166193] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271926 ] 00:14:51.170 [2024-12-05 21:07:59.203888] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:51.170 [2024-12-05 21:07:59.212657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:51.170 [2024-12-05 21:07:59.212680] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7cc9d77000 00:14:51.171 [2024-12-05 21:07:59.213656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.214660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.215665] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.216668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.217673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.218676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.219688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.220694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:51.171 [2024-12-05 21:07:59.221704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:51.171 [2024-12-05 21:07:59.221713] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7cc9d6c000 00:14:51.171 [2024-12-05 21:07:59.222627] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:51.171 [2024-12-05 21:07:59.236214] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:51.171 [2024-12-05 21:07:59.236242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:51.171 [2024-12-05 21:07:59.238817] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:51.171 [2024-12-05 21:07:59.238849] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:51.171 [2024-12-05 21:07:59.238915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:51.171 [2024-12-05 21:07:59.238930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:51.171 [2024-12-05 21:07:59.238936] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:51.171 [2024-12-05 21:07:59.239800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:51.171 [2024-12-05 21:07:59.239808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:51.171 [2024-12-05 21:07:59.239815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:51.171 [2024-12-05 21:07:59.240822] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:51.171 [2024-12-05 21:07:59.240831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:51.171 [2024-12-05 21:07:59.240837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:51.171 [2024-12-05 21:07:59.241811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:51.171 [2024-12-05 21:07:59.241819] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:51.171 [2024-12-05 21:07:59.242817] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:51.171 [2024-12-05 21:07:59.242824] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:51.171 [2024-12-05 21:07:59.242829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:51.171 [2024-12-05 21:07:59.242835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:51.171 [2024-12-05 21:07:59.242942] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:51.171 [2024-12-05 21:07:59.242947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:51.171 [2024-12-05 21:07:59.242951] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:51.171 [2024-12-05 21:07:59.243825] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:51.171 [2024-12-05 21:07:59.244834] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:51.171 [2024-12-05 21:07:59.245837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:51.171 [2024-12-05 21:07:59.246837] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.171 [2024-12-05 21:07:59.246914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:51.171 [2024-12-05 21:07:59.247850] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:51.171 [2024-12-05 21:07:59.247858] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:51.171 [2024-12-05 21:07:59.247864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.247881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:51.171 [2024-12-05 21:07:59.247890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.247907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:51.171 [2024-12-05 21:07:59.247911] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.171 [2024-12-05 21:07:59.247915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.171 [2024-12-05 21:07:59.247927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.171 [2024-12-05 21:07:59.247987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:51.171 [2024-12-05 21:07:59.247996] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:51.171 [2024-12-05 21:07:59.248001] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:51.171 [2024-12-05 21:07:59.248006] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:51.171 [2024-12-05 21:07:59.248010] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:51.171 [2024-12-05 21:07:59.248014] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:51.171 [2024-12-05 21:07:59.248018] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:51.171 [2024-12-05 21:07:59.248022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:51.171 [2024-12-05 21:07:59.248050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:51.171 [2024-12-05 21:07:59.248060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.171 [2024-12-05 
21:07:59.248067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.171 [2024-12-05 21:07:59.248075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.171 [2024-12-05 21:07:59.248082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.171 [2024-12-05 21:07:59.248086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:51.171 [2024-12-05 21:07:59.248113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:51.171 [2024-12-05 21:07:59.248118] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:51.171 [2024-12-05 21:07:59.248123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:51.171 [2024-12-05 21:07:59.248156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:51.171 [2024-12-05 21:07:59.248205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:51.171 [2024-12-05 21:07:59.248219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:51.171 [2024-12-05 21:07:59.248223] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:51.171 [2024-12-05 21:07:59.248226] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.172 [2024-12-05 21:07:59.248231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248253] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:51.172 [2024-12-05 21:07:59.248261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248274] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:51.172 [2024-12-05 21:07:59.248278] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.172 [2024-12-05 21:07:59.248281] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.172 [2024-12-05 21:07:59.248287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248332] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:51.172 [2024-12-05 21:07:59.248336] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.172 [2024-12-05 21:07:59.248341] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.172 [2024-12-05 21:07:59.248347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248407] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:51.172 [2024-12-05 21:07:59.248411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:51.172 [2024-12-05 21:07:59.248416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:51.172 [2024-12-05 21:07:59.248432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248515] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:51.172 [2024-12-05 21:07:59.248520] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:51.172 [2024-12-05 21:07:59.248523] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:51.172 [2024-12-05 21:07:59.248526] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:51.172 [2024-12-05 21:07:59.248529] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:51.172 [2024-12-05 21:07:59.248535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:51.172 [2024-12-05 21:07:59.248542] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:51.172 [2024-12-05 21:07:59.248547] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:51.172 [2024-12-05 21:07:59.248550] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.172 [2024-12-05 21:07:59.248555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248561] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:51.172 [2024-12-05 21:07:59.248565] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:51.172 [2024-12-05 21:07:59.248568] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.172 [2024-12-05 21:07:59.248573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248580] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:51.172 [2024-12-05 21:07:59.248584] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:51.172 [2024-12-05 21:07:59.248587] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:51.172 [2024-12-05 21:07:59.248592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:51.172 [2024-12-05 21:07:59.248598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:51.172 [2024-12-05 21:07:59.248624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:51.172 ===================================================== 00:14:51.172 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.172 ===================================================== 00:14:51.172 Controller Capabilities/Features 00:14:51.172 ================================ 00:14:51.172 Vendor ID: 4e58 00:14:51.172 Subsystem Vendor ID: 4e58 00:14:51.172 Serial Number: SPDK1 00:14:51.172 Model Number: SPDK bdev Controller 00:14:51.172 Firmware Version: 25.01 00:14:51.172 Recommended Arb Burst: 6 00:14:51.172 IEEE OUI Identifier: 8d 6b 50 00:14:51.172 Multi-path I/O 00:14:51.172 May have multiple subsystem ports: Yes 00:14:51.172 May have multiple controllers: Yes 00:14:51.172 Associated with SR-IOV VF: No 00:14:51.172 Max Data Transfer Size: 131072 00:14:51.172 Max Number of Namespaces: 32 00:14:51.172 Max Number of I/O Queues: 127 00:14:51.172 NVMe Specification Version (VS): 1.3 00:14:51.172 NVMe Specification Version (Identify): 1.3 00:14:51.172 Maximum Queue Entries: 256 00:14:51.172 Contiguous Queues Required: Yes 00:14:51.172 Arbitration Mechanisms Supported 00:14:51.172 Weighted Round Robin: Not Supported 00:14:51.172 Vendor Specific: Not Supported 00:14:51.172 Reset Timeout: 15000 ms 00:14:51.172 Doorbell Stride: 4 bytes 00:14:51.172 NVM Subsystem Reset: Not Supported 00:14:51.172 Command Sets Supported 00:14:51.172 NVM Command Set: Supported 00:14:51.172 Boot Partition: Not Supported 00:14:51.172 Memory 
Page Size Minimum: 4096 bytes 00:14:51.172 Memory Page Size Maximum: 4096 bytes 00:14:51.172 Persistent Memory Region: Not Supported 00:14:51.172 Optional Asynchronous Events Supported 00:14:51.172 Namespace Attribute Notices: Supported 00:14:51.172 Firmware Activation Notices: Not Supported 00:14:51.172 ANA Change Notices: Not Supported 00:14:51.172 PLE Aggregate Log Change Notices: Not Supported 00:14:51.172 LBA Status Info Alert Notices: Not Supported 00:14:51.172 EGE Aggregate Log Change Notices: Not Supported 00:14:51.172 Normal NVM Subsystem Shutdown event: Not Supported 00:14:51.172 Zone Descriptor Change Notices: Not Supported 00:14:51.172 Discovery Log Change Notices: Not Supported 00:14:51.172 Controller Attributes 00:14:51.172 128-bit Host Identifier: Supported 00:14:51.172 Non-Operational Permissive Mode: Not Supported 00:14:51.172 NVM Sets: Not Supported 00:14:51.172 Read Recovery Levels: Not Supported 00:14:51.172 Endurance Groups: Not Supported 00:14:51.172 Predictable Latency Mode: Not Supported 00:14:51.172 Traffic Based Keep ALive: Not Supported 00:14:51.172 Namespace Granularity: Not Supported 00:14:51.172 SQ Associations: Not Supported 00:14:51.172 UUID List: Not Supported 00:14:51.172 Multi-Domain Subsystem: Not Supported 00:14:51.172 Fixed Capacity Management: Not Supported 00:14:51.173 Variable Capacity Management: Not Supported 00:14:51.173 Delete Endurance Group: Not Supported 00:14:51.173 Delete NVM Set: Not Supported 00:14:51.173 Extended LBA Formats Supported: Not Supported 00:14:51.173 Flexible Data Placement Supported: Not Supported 00:14:51.173 00:14:51.173 Controller Memory Buffer Support 00:14:51.173 ================================ 00:14:51.173 Supported: No 00:14:51.173 00:14:51.173 Persistent Memory Region Support 00:14:51.173 ================================ 00:14:51.173 Supported: No 00:14:51.173 00:14:51.173 Admin Command Set Attributes 00:14:51.173 ============================ 00:14:51.173 Security Send/Receive: Not Supported 
00:14:51.173 Format NVM: Not Supported 00:14:51.173 Firmware Activate/Download: Not Supported 00:14:51.173 Namespace Management: Not Supported 00:14:51.173 Device Self-Test: Not Supported 00:14:51.173 Directives: Not Supported 00:14:51.173 NVMe-MI: Not Supported 00:14:51.173 Virtualization Management: Not Supported 00:14:51.173 Doorbell Buffer Config: Not Supported 00:14:51.173 Get LBA Status Capability: Not Supported 00:14:51.173 Command & Feature Lockdown Capability: Not Supported 00:14:51.173 Abort Command Limit: 4 00:14:51.173 Async Event Request Limit: 4 00:14:51.173 Number of Firmware Slots: N/A 00:14:51.173 Firmware Slot 1 Read-Only: N/A 00:14:51.173 Firmware Activation Without Reset: N/A 00:14:51.173 Multiple Update Detection Support: N/A 00:14:51.173 Firmware Update Granularity: No Information Provided 00:14:51.173 Per-Namespace SMART Log: No 00:14:51.173 Asymmetric Namespace Access Log Page: Not Supported 00:14:51.173 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:51.173 Command Effects Log Page: Supported 00:14:51.173 Get Log Page Extended Data: Supported 00:14:51.173 Telemetry Log Pages: Not Supported 00:14:51.173 Persistent Event Log Pages: Not Supported 00:14:51.173 Supported Log Pages Log Page: May Support 00:14:51.173 Commands Supported & Effects Log Page: Not Supported 00:14:51.173 Feature Identifiers & Effects Log Page:May Support 00:14:51.173 NVMe-MI Commands & Effects Log Page: May Support 00:14:51.173 Data Area 4 for Telemetry Log: Not Supported 00:14:51.173 Error Log Page Entries Supported: 128 00:14:51.173 Keep Alive: Supported 00:14:51.173 Keep Alive Granularity: 10000 ms 00:14:51.173 00:14:51.173 NVM Command Set Attributes 00:14:51.173 ========================== 00:14:51.173 Submission Queue Entry Size 00:14:51.173 Max: 64 00:14:51.173 Min: 64 00:14:51.173 Completion Queue Entry Size 00:14:51.173 Max: 16 00:14:51.173 Min: 16 00:14:51.173 Number of Namespaces: 32 00:14:51.173 Compare Command: Supported 00:14:51.173 Write Uncorrectable 
Command: Not Supported 00:14:51.173 Dataset Management Command: Supported 00:14:51.173 Write Zeroes Command: Supported 00:14:51.173 Set Features Save Field: Not Supported 00:14:51.173 Reservations: Not Supported 00:14:51.173 Timestamp: Not Supported 00:14:51.173 Copy: Supported 00:14:51.173 Volatile Write Cache: Present 00:14:51.173 Atomic Write Unit (Normal): 1 00:14:51.173 Atomic Write Unit (PFail): 1 00:14:51.173 Atomic Compare & Write Unit: 1 00:14:51.173 Fused Compare & Write: Supported 00:14:51.173 Scatter-Gather List 00:14:51.173 SGL Command Set: Supported (Dword aligned) 00:14:51.173 SGL Keyed: Not Supported 00:14:51.173 SGL Bit Bucket Descriptor: Not Supported 00:14:51.173 SGL Metadata Pointer: Not Supported 00:14:51.173 Oversized SGL: Not Supported 00:14:51.173 SGL Metadata Address: Not Supported 00:14:51.173 SGL Offset: Not Supported 00:14:51.173 Transport SGL Data Block: Not Supported 00:14:51.173 Replay Protected Memory Block: Not Supported 00:14:51.173 00:14:51.173 Firmware Slot Information 00:14:51.173 ========================= 00:14:51.173 Active slot: 1 00:14:51.173 Slot 1 Firmware Revision: 25.01 00:14:51.173 00:14:51.173 00:14:51.173 Commands Supported and Effects 00:14:51.173 ============================== 00:14:51.173 Admin Commands 00:14:51.173 -------------- 00:14:51.173 Get Log Page (02h): Supported 00:14:51.173 Identify (06h): Supported 00:14:51.173 Abort (08h): Supported 00:14:51.173 Set Features (09h): Supported 00:14:51.173 Get Features (0Ah): Supported 00:14:51.173 Asynchronous Event Request (0Ch): Supported 00:14:51.173 Keep Alive (18h): Supported 00:14:51.173 I/O Commands 00:14:51.173 ------------ 00:14:51.173 Flush (00h): Supported LBA-Change 00:14:51.173 Write (01h): Supported LBA-Change 00:14:51.173 Read (02h): Supported 00:14:51.173 Compare (05h): Supported 00:14:51.173 Write Zeroes (08h): Supported LBA-Change 00:14:51.173 Dataset Management (09h): Supported LBA-Change 00:14:51.173 Copy (19h): Supported LBA-Change 00:14:51.173 
00:14:51.173 Error Log 00:14:51.173 ========= 00:14:51.173 00:14:51.173 Arbitration 00:14:51.173 =========== 00:14:51.173 Arbitration Burst: 1 00:14:51.173 00:14:51.173 Power Management 00:14:51.173 ================ 00:14:51.173 Number of Power States: 1 00:14:51.173 Current Power State: Power State #0 00:14:51.173 Power State #0: 00:14:51.173 Max Power: 0.00 W 00:14:51.173 Non-Operational State: Operational 00:14:51.173 Entry Latency: Not Reported 00:14:51.173 Exit Latency: Not Reported 00:14:51.173 Relative Read Throughput: 0 00:14:51.173 Relative Read Latency: 0 00:14:51.173 Relative Write Throughput: 0 00:14:51.173 Relative Write Latency: 0 00:14:51.173 Idle Power: Not Reported 00:14:51.173 Active Power: Not Reported 00:14:51.173 Non-Operational Permissive Mode: Not Supported 00:14:51.173 00:14:51.173 Health Information 00:14:51.173 ================== 00:14:51.173 Critical Warnings: 00:14:51.173 Available Spare Space: OK 00:14:51.173 Temperature: OK 00:14:51.173 Device Reliability: OK 00:14:51.173 Read Only: No 00:14:51.173 Volatile Memory Backup: OK 00:14:51.173 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:51.173 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:51.173 Available Spare: 0% 00:14:51.173 Available Sp[2024-12-05 21:07:59.248707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:51.173 [2024-12-05 21:07:59.248716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:51.173 [2024-12-05 21:07:59.248741] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:51.173 [2024-12-05 21:07:59.248749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.173 [2024-12-05 21:07:59.248755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.173 [2024-12-05 21:07:59.248761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.173 [2024-12-05 21:07:59.248766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.173 [2024-12-05 21:07:59.251374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:51.173 [2024-12-05 21:07:59.251385] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:51.173 [2024-12-05 21:07:59.251882] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.173 [2024-12-05 21:07:59.251932] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:51.173 [2024-12-05 21:07:59.251938] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:51.173 [2024-12-05 21:07:59.252875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:51.173 [2024-12-05 21:07:59.252885] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:51.173 [2024-12-05 21:07:59.252934] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:51.173 [2024-12-05 21:07:59.253906] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:51.431 are Threshold: 0% 00:14:51.431 Life Percentage Used: 0% 
00:14:51.431 Data Units Read: 0 00:14:51.431 Data Units Written: 0 00:14:51.431 Host Read Commands: 0 00:14:51.431 Host Write Commands: 0 00:14:51.431 Controller Busy Time: 0 minutes 00:14:51.431 Power Cycles: 0 00:14:51.431 Power On Hours: 0 hours 00:14:51.431 Unsafe Shutdowns: 0 00:14:51.431 Unrecoverable Media Errors: 0 00:14:51.431 Lifetime Error Log Entries: 0 00:14:51.431 Warning Temperature Time: 0 minutes 00:14:51.431 Critical Temperature Time: 0 minutes 00:14:51.431 00:14:51.431 Number of Queues 00:14:51.431 ================ 00:14:51.431 Number of I/O Submission Queues: 127 00:14:51.431 Number of I/O Completion Queues: 127 00:14:51.431 00:14:51.431 Active Namespaces 00:14:51.431 ================= 00:14:51.431 Namespace ID:1 00:14:51.431 Error Recovery Timeout: Unlimited 00:14:51.431 Command Set Identifier: NVM (00h) 00:14:51.431 Deallocate: Supported 00:14:51.431 Deallocated/Unwritten Error: Not Supported 00:14:51.431 Deallocated Read Value: Unknown 00:14:51.431 Deallocate in Write Zeroes: Not Supported 00:14:51.431 Deallocated Guard Field: 0xFFFF 00:14:51.431 Flush: Supported 00:14:51.431 Reservation: Supported 00:14:51.431 Namespace Sharing Capabilities: Multiple Controllers 00:14:51.431 Size (in LBAs): 131072 (0GiB) 00:14:51.431 Capacity (in LBAs): 131072 (0GiB) 00:14:51.431 Utilization (in LBAs): 131072 (0GiB) 00:14:51.431 NGUID: 2500D999EC044B069C8F93A2DF07A427 00:14:51.431 UUID: 2500d999-ec04-4b06-9c8f-93a2df07a427 00:14:51.431 Thin Provisioning: Not Supported 00:14:51.431 Per-NS Atomic Units: Yes 00:14:51.431 Atomic Boundary Size (Normal): 0 00:14:51.431 Atomic Boundary Size (PFail): 0 00:14:51.431 Atomic Boundary Offset: 0 00:14:51.431 Maximum Single Source Range Length: 65535 00:14:51.432 Maximum Copy Length: 65535 00:14:51.432 Maximum Source Range Count: 1 00:14:51.432 NGUID/EUI64 Never Reused: No 00:14:51.432 Namespace Write Protected: No 00:14:51.432 Number of LBA Formats: 1 00:14:51.432 Current LBA Format: LBA Format #00 00:14:51.432 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:51.432 00:14:51.432 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:51.432 [2024-12-05 21:07:59.491233] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.688 Initializing NVMe Controllers 00:14:56.688 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:56.688 Initialization complete. Launching workers. 00:14:56.688 ======================================================== 00:14:56.688 Latency(us) 00:14:56.688 Device Information : IOPS MiB/s Average min max 00:14:56.688 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39905.65 155.88 3207.17 964.78 6955.05 00:14:56.688 ======================================================== 00:14:56.688 Total : 39905.65 155.88 3207.17 964.78 6955.05 00:14:56.688 00:14:56.688 [2024-12-05 21:08:04.510151] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.688 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:56.688 [2024-12-05 21:08:04.744241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.944 Initializing NVMe Controllers 00:15:01.944 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.944 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:01.944 Initialization complete. Launching workers. 00:15:01.944 ======================================================== 00:15:01.944 Latency(us) 00:15:01.944 Device Information : IOPS MiB/s Average min max 00:15:01.944 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.32 3991.54 15966.51 00:15:01.944 ======================================================== 00:15:01.944 Total : 16025.60 62.60 7997.32 3991.54 15966.51 00:15:01.944 00:15:01.944 [2024-12-05 21:08:09.778479] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.944 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:01.944 [2024-12-05 21:08:09.994456] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.200 [2024-12-05 21:08:15.053598] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.200 Initializing NVMe Controllers 00:15:07.200 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.200 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.200 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:07.200 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:07.200 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:07.200 Initialization complete. 
Launching workers. 00:15:07.200 Starting thread on core 2 00:15:07.200 Starting thread on core 3 00:15:07.200 Starting thread on core 1 00:15:07.200 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:07.457 [2024-12-05 21:08:15.346080] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.733 [2024-12-05 21:08:18.408806] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.733 Initializing NVMe Controllers 00:15:10.733 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.733 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.733 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:10.733 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:10.733 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:10.733 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:10.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:10.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:10.733 Initialization complete. Launching workers. 
00:15:10.733 Starting thread on core 1 with urgent priority queue 00:15:10.733 Starting thread on core 2 with urgent priority queue 00:15:10.733 Starting thread on core 3 with urgent priority queue 00:15:10.733 Starting thread on core 0 with urgent priority queue 00:15:10.733 SPDK bdev Controller (SPDK1 ) core 0: 8550.00 IO/s 11.70 secs/100000 ios 00:15:10.733 SPDK bdev Controller (SPDK1 ) core 1: 9997.00 IO/s 10.00 secs/100000 ios 00:15:10.733 SPDK bdev Controller (SPDK1 ) core 2: 8216.33 IO/s 12.17 secs/100000 ios 00:15:10.733 SPDK bdev Controller (SPDK1 ) core 3: 7972.00 IO/s 12.54 secs/100000 ios 00:15:10.733 ======================================================== 00:15:10.733 00:15:10.733 21:08:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:10.733 [2024-12-05 21:08:18.696832] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.733 Initializing NVMe Controllers 00:15:10.733 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.733 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.733 Namespace ID: 1 size: 0GB 00:15:10.733 Initialization complete. 00:15:10.733 INFO: using host memory buffer for IO 00:15:10.733 Hello world! 
00:15:10.733 [2024-12-05 21:08:18.731065] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.733 21:08:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:10.990 [2024-12-05 21:08:19.009774] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.922 Initializing NVMe Controllers 00:15:11.922 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.922 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.922 Initialization complete. Launching workers. 00:15:11.922 submit (in ns) avg, min, max = 5435.2, 3147.6, 3999001.0 00:15:11.922 complete (in ns) avg, min, max = 21259.9, 1723.8, 5990562.9 00:15:11.922 00:15:11.922 Submit histogram 00:15:11.922 ================ 00:15:11.922 Range in us Cumulative Count 00:15:11.922 3.139 - 3.154: 0.0060% ( 1) 00:15:11.922 3.154 - 3.170: 0.0302% ( 4) 00:15:11.922 3.170 - 3.185: 0.0846% ( 9) 00:15:11.922 3.185 - 3.200: 0.1511% ( 11) 00:15:11.922 3.200 - 3.215: 0.6347% ( 80) 00:15:11.922 3.215 - 3.230: 2.9558% ( 384) 00:15:11.922 3.230 - 3.246: 7.8820% ( 815) 00:15:11.922 3.246 - 3.261: 13.6545% ( 955) 00:15:11.922 3.261 - 3.276: 19.7957% ( 1016) 00:15:11.922 3.276 - 3.291: 26.1545% ( 1052) 00:15:11.922 3.291 - 3.307: 31.9572% ( 960) 00:15:11.922 3.307 - 3.322: 37.4637% ( 911) 00:15:11.922 3.322 - 3.337: 43.4417% ( 989) 00:15:11.922 3.337 - 3.352: 49.4197% ( 989) 00:15:11.922 3.352 - 3.368: 54.8175% ( 893) 00:15:11.922 3.368 - 3.383: 61.9500% ( 1180) 00:15:11.922 3.383 - 3.398: 68.9978% ( 1166) 00:15:11.922 3.398 - 3.413: 74.2686% ( 872) 00:15:11.922 3.413 - 3.429: 79.3218% ( 836) 00:15:11.922 3.429 - 3.444: 82.5798% ( 539) 00:15:11.922 3.444 - 3.459: 84.9553% ( 393) 
00:15:11.922 3.459 - 3.474: 86.2065% ( 207) 00:15:11.922 3.474 - 3.490: 87.0044% ( 132) 00:15:11.922 3.490 - 3.505: 87.3852% ( 63) 00:15:11.922 3.505 - 3.520: 87.7599% ( 62) 00:15:11.922 3.520 - 3.535: 88.3946% ( 105) 00:15:11.922 3.535 - 3.550: 89.2771% ( 146) 00:15:11.922 3.550 - 3.566: 90.1475% ( 144) 00:15:11.922 3.566 - 3.581: 91.1750% ( 170) 00:15:11.922 3.581 - 3.596: 92.1422% ( 160) 00:15:11.922 3.596 - 3.611: 93.1032% ( 159) 00:15:11.922 3.611 - 3.627: 94.1550% ( 174) 00:15:11.922 3.627 - 3.642: 95.2551% ( 182) 00:15:11.922 3.642 - 3.657: 96.2706% ( 168) 00:15:11.922 3.657 - 3.672: 97.1047% ( 138) 00:15:11.922 3.672 - 3.688: 97.7031% ( 99) 00:15:11.922 3.688 - 3.703: 98.2531% ( 91) 00:15:11.922 3.703 - 3.718: 98.6339% ( 63) 00:15:11.922 3.718 - 3.733: 98.9422% ( 51) 00:15:11.922 3.733 - 3.749: 99.1840% ( 40) 00:15:11.922 3.749 - 3.764: 99.3412% ( 26) 00:15:11.922 3.764 - 3.779: 99.4620% ( 20) 00:15:11.922 3.779 - 3.794: 99.5104% ( 8) 00:15:11.922 3.794 - 3.810: 99.5769% ( 11) 00:15:11.922 3.810 - 3.825: 99.6071% ( 5) 00:15:11.922 3.825 - 3.840: 99.6313% ( 4) 00:15:11.922 3.840 - 3.855: 99.6373% ( 1) 00:15:11.922 3.855 - 3.870: 99.6494% ( 2) 00:15:11.922 3.870 - 3.886: 99.6615% ( 2) 00:15:11.922 3.886 - 3.901: 99.6676% ( 1) 00:15:11.922 3.901 - 3.931: 99.6796% ( 2) 00:15:11.922 3.931 - 3.962: 99.6917% ( 2) 00:15:11.922 3.962 - 3.992: 99.6978% ( 1) 00:15:11.922 4.023 - 4.053: 99.7038% ( 1) 00:15:11.922 5.029 - 5.059: 99.7099% ( 1) 00:15:11.922 5.120 - 5.150: 99.7159% ( 1) 00:15:11.922 5.516 - 5.547: 99.7220% ( 1) 00:15:11.922 5.577 - 5.608: 99.7280% ( 1) 00:15:11.922 5.608 - 5.638: 99.7340% ( 1) 00:15:11.922 5.760 - 5.790: 99.7401% ( 1) 00:15:11.922 5.790 - 5.821: 99.7522% ( 2) 00:15:11.922 5.851 - 5.882: 99.7582% ( 1) 00:15:11.922 5.882 - 5.912: 99.7643% ( 1) 00:15:11.922 5.943 - 5.973: 99.7703% ( 1) 00:15:11.922 6.217 - 6.248: 99.7764% ( 1) 00:15:11.922 6.248 - 6.278: 99.7824% ( 1) 00:15:11.922 6.430 - 6.461: 99.8005% ( 3) 00:15:11.922 6.461 - 6.491: 
99.8126% ( 2) 00:15:11.922 6.613 - 6.644: 99.8187% ( 1) 00:15:11.922 6.644 - 6.674: 99.8247% ( 1) 00:15:11.922 6.857 - 6.888: 99.8308% ( 1) 00:15:11.922 6.949 - 6.979: 99.8428% ( 2) 00:15:11.922 7.040 - 7.070: 99.8489% ( 1) 00:15:11.922 7.162 - 7.192: 99.8549% ( 1) 00:15:11.922 7.223 - 7.253: 99.8610% ( 1) 00:15:11.922 7.253 - 7.284: 99.8670% ( 1) 00:15:11.922 7.314 - 7.345: 99.8791% ( 2) 00:15:11.922 7.345 - 7.375: 99.8852% ( 1) 00:15:11.922 7.375 - 7.406: 99.8912% ( 1) 00:15:12.180 [2024-12-05 21:08:20.031876] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.180 7.406 - 7.436: 99.8972% ( 1) 00:15:12.180 7.467 - 7.497: 99.9033% ( 1) 00:15:12.180 7.497 - 7.528: 99.9093% ( 1) 00:15:12.180 7.650 - 7.680: 99.9154% ( 1) 00:15:12.180 7.680 - 7.710: 99.9214% ( 1) 00:15:12.180 8.533 - 8.594: 99.9275% ( 1) 00:15:12.180 9.996 - 10.057: 99.9335% ( 1) 00:15:12.180 13.592 - 13.653: 99.9396% ( 1) 00:15:12.180 14.019 - 14.080: 99.9456% ( 1) 00:15:12.180 2012.891 - 2028.495: 99.9516% ( 1) 00:15:12.180 3994.575 - 4025.783: 100.0000% ( 8) 00:15:12.180 00:15:12.180 Complete histogram 00:15:12.180 ================== 00:15:12.180 Range in us Cumulative Count 00:15:12.180 1.722 - 1.730: 0.0302% ( 5) 00:15:12.180 1.730 - 1.737: 0.1390% ( 18) 00:15:12.180 1.737 - 1.745: 0.2236% ( 14) 00:15:12.180 1.745 - 1.752: 0.2539% ( 5) 00:15:12.180 1.752 - 1.760: 0.2599% ( 1) 00:15:12.180 1.760 - 1.768: 0.2780% ( 3) 00:15:12.180 1.768 - 1.775: 0.8523% ( 95) 00:15:12.180 1.775 - 1.783: 4.8356% ( 659) 00:15:12.180 1.783 - 1.790: 12.5544% ( 1277) 00:15:12.180 1.790 - 1.798: 17.0636% ( 746) 00:15:12.180 1.798 - 1.806: 18.8407% ( 294) 00:15:12.180 1.806 - 1.813: 19.7050% ( 143) 00:15:12.180 1.813 - 1.821: 20.2551% ( 91) 00:15:12.180 1.821 - 1.829: 21.7178% ( 242) 00:15:12.180 1.829 - 1.836: 33.0694% ( 1878) 00:15:12.180 1.836 - 1.844: 61.8835% ( 4767) 00:15:12.180 1.844 - 1.851: 82.9908% ( 3492) 00:15:12.180 1.851 - 1.859: 89.7667% ( 1121) 
00:15:12.180 1.859 - 1.867: 92.3114% ( 421) 00:15:12.180 1.867 - 1.874: 94.0401% ( 286) 00:15:12.180 1.874 - 1.882: 94.7413% ( 116) 00:15:12.180 1.882 - 1.890: 94.8985% ( 26) 00:15:12.180 1.890 - 1.897: 95.0677% ( 28) 00:15:12.180 1.897 - 1.905: 95.5089% ( 73) 00:15:12.180 1.905 - 1.912: 96.3189% ( 134) 00:15:12.180 1.912 - 1.920: 97.3465% ( 170) 00:15:12.180 1.920 - 1.928: 98.0899% ( 123) 00:15:12.180 1.928 - 1.935: 98.4768% ( 64) 00:15:12.180 1.935 - 1.943: 98.6279% ( 25) 00:15:12.180 1.943 - 1.950: 98.7367% ( 18) 00:15:12.180 1.950 - 1.966: 99.0873% ( 58) 00:15:12.180 1.966 - 1.981: 99.1296% ( 7) 00:15:12.180 1.981 - 1.996: 99.1417% ( 2) 00:15:12.180 2.011 - 2.027: 99.1598% ( 3) 00:15:12.180 2.027 - 2.042: 99.1900% ( 5) 00:15:12.180 2.042 - 2.057: 99.1961% ( 1) 00:15:12.181 2.057 - 2.072: 99.2082% ( 2) 00:15:12.181 2.072 - 2.088: 99.2988% ( 15) 00:15:12.181 2.088 - 2.103: 99.3351% ( 6) 00:15:12.181 2.103 - 2.118: 99.3472% ( 2) 00:15:12.181 2.149 - 2.164: 99.3532% ( 1) 00:15:12.181 3.703 - 3.718: 99.3593% ( 1) 00:15:12.181 3.901 - 3.931: 99.3714% ( 2) 00:15:12.181 3.931 - 3.962: 99.3835% ( 2) 00:15:12.181 4.236 - 4.267: 99.3895% ( 1) 00:15:12.181 4.267 - 4.297: 99.3956% ( 1) 00:15:12.181 4.297 - 4.328: 99.4016% ( 1) 00:15:12.181 4.510 - 4.541: 99.4137% ( 2) 00:15:12.181 4.571 - 4.602: 99.4197% ( 1) 00:15:12.181 4.602 - 4.632: 99.4258% ( 1) 00:15:12.181 4.693 - 4.724: 99.4318% ( 1) 00:15:12.181 4.968 - 4.998: 99.4379% ( 1) 00:15:12.181 4.998 - 5.029: 99.4439% ( 1) 00:15:12.181 5.059 - 5.090: 99.4560% ( 2) 00:15:12.181 5.120 - 5.150: 99.4620% ( 1) 00:15:12.181 5.150 - 5.181: 99.4681% ( 1) 00:15:12.181 5.333 - 5.364: 99.4741% ( 1) 00:15:12.181 5.699 - 5.730: 99.4802% ( 1) 00:15:12.181 5.760 - 5.790: 99.4862% ( 1) 00:15:12.181 5.851 - 5.882: 99.4923% ( 1) 00:15:12.181 6.004 - 6.034: 99.4983% ( 1) 00:15:12.181 6.370 - 6.400: 99.5044% ( 1) 00:15:12.181 8.838 - 8.899: 99.5104% ( 1) 00:15:12.181 12.130 - 12.190: 99.5164% ( 1) 00:15:12.181 2668.251 - 2683.855: 99.5225% ( 
1) 00:15:12.181 3042.743 - 3058.347: 99.5285% ( 1) 00:15:12.181 3978.971 - 3994.575: 99.5346% ( 1) 00:15:12.181 3994.575 - 4025.783: 99.9819% ( 74) 00:15:12.181 4993.219 - 5024.427: 99.9940% ( 2) 00:15:12.181 5960.655 - 5991.863: 100.0000% ( 1) 00:15:12.181 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.181 [ 00:15:12.181 { 00:15:12.181 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.181 "subtype": "Discovery", 00:15:12.181 "listen_addresses": [], 00:15:12.181 "allow_any_host": true, 00:15:12.181 "hosts": [] 00:15:12.181 }, 00:15:12.181 { 00:15:12.181 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.181 "subtype": "NVMe", 00:15:12.181 "listen_addresses": [ 00:15:12.181 { 00:15:12.181 "trtype": "VFIOUSER", 00:15:12.181 "adrfam": "IPv4", 00:15:12.181 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.181 "trsvcid": "0" 00:15:12.181 } 00:15:12.181 ], 00:15:12.181 "allow_any_host": true, 00:15:12.181 "hosts": [], 00:15:12.181 "serial_number": "SPDK1", 00:15:12.181 "model_number": "SPDK bdev Controller", 00:15:12.181 "max_namespaces": 32, 00:15:12.181 "min_cntlid": 1, 00:15:12.181 "max_cntlid": 65519, 00:15:12.181 "namespaces": [ 00:15:12.181 { 00:15:12.181 "nsid": 1, 00:15:12.181 "bdev_name": "Malloc1", 00:15:12.181 
"name": "Malloc1", 00:15:12.181 "nguid": "2500D999EC044B069C8F93A2DF07A427", 00:15:12.181 "uuid": "2500d999-ec04-4b06-9c8f-93a2df07a427" 00:15:12.181 } 00:15:12.181 ] 00:15:12.181 }, 00:15:12.181 { 00:15:12.181 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.181 "subtype": "NVMe", 00:15:12.181 "listen_addresses": [ 00:15:12.181 { 00:15:12.181 "trtype": "VFIOUSER", 00:15:12.181 "adrfam": "IPv4", 00:15:12.181 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.181 "trsvcid": "0" 00:15:12.181 } 00:15:12.181 ], 00:15:12.181 "allow_any_host": true, 00:15:12.181 "hosts": [], 00:15:12.181 "serial_number": "SPDK2", 00:15:12.181 "model_number": "SPDK bdev Controller", 00:15:12.181 "max_namespaces": 32, 00:15:12.181 "min_cntlid": 1, 00:15:12.181 "max_cntlid": 65519, 00:15:12.181 "namespaces": [ 00:15:12.181 { 00:15:12.181 "nsid": 1, 00:15:12.181 "bdev_name": "Malloc2", 00:15:12.181 "name": "Malloc2", 00:15:12.181 "nguid": "177B8CE1BDF547F7A09F32CF9D6CD3F0", 00:15:12.181 "uuid": "177b8ce1-bdf5-47f7-a09f-32cf9d6cd3f0" 00:15:12.181 } 00:15:12.181 ] 00:15:12.181 } 00:15:12.181 ] 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1275373 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:12.181 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:12.438 [2024-12-05 21:08:20.429762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.438 Malloc3 00:15:12.438 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:12.695 [2024-12-05 21:08:20.694801] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.695 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.695 Asynchronous Event Request test 00:15:12.695 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.695 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.695 Registering asynchronous event callbacks... 00:15:12.695 Starting namespace attribute notice tests for all controllers... 00:15:12.695 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:12.695 aer_cb - Changed Namespace 00:15:12.695 Cleaning up... 
00:15:12.954 [ 00:15:12.954 { 00:15:12.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.954 "subtype": "Discovery", 00:15:12.954 "listen_addresses": [], 00:15:12.954 "allow_any_host": true, 00:15:12.954 "hosts": [] 00:15:12.954 }, 00:15:12.954 { 00:15:12.954 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.954 "subtype": "NVMe", 00:15:12.954 "listen_addresses": [ 00:15:12.954 { 00:15:12.954 "trtype": "VFIOUSER", 00:15:12.954 "adrfam": "IPv4", 00:15:12.954 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.954 "trsvcid": "0" 00:15:12.954 } 00:15:12.954 ], 00:15:12.954 "allow_any_host": true, 00:15:12.954 "hosts": [], 00:15:12.954 "serial_number": "SPDK1", 00:15:12.954 "model_number": "SPDK bdev Controller", 00:15:12.954 "max_namespaces": 32, 00:15:12.954 "min_cntlid": 1, 00:15:12.954 "max_cntlid": 65519, 00:15:12.954 "namespaces": [ 00:15:12.954 { 00:15:12.954 "nsid": 1, 00:15:12.954 "bdev_name": "Malloc1", 00:15:12.954 "name": "Malloc1", 00:15:12.954 "nguid": "2500D999EC044B069C8F93A2DF07A427", 00:15:12.954 "uuid": "2500d999-ec04-4b06-9c8f-93a2df07a427" 00:15:12.954 }, 00:15:12.954 { 00:15:12.954 "nsid": 2, 00:15:12.954 "bdev_name": "Malloc3", 00:15:12.954 "name": "Malloc3", 00:15:12.954 "nguid": "2AC88BC5B7FC4C9CBCABD2E927FAAF17", 00:15:12.954 "uuid": "2ac88bc5-b7fc-4c9c-bcab-d2e927faaf17" 00:15:12.954 } 00:15:12.954 ] 00:15:12.954 }, 00:15:12.954 { 00:15:12.954 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.954 "subtype": "NVMe", 00:15:12.954 "listen_addresses": [ 00:15:12.954 { 00:15:12.954 "trtype": "VFIOUSER", 00:15:12.954 "adrfam": "IPv4", 00:15:12.954 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.954 "trsvcid": "0" 00:15:12.954 } 00:15:12.954 ], 00:15:12.954 "allow_any_host": true, 00:15:12.954 "hosts": [], 00:15:12.954 "serial_number": "SPDK2", 00:15:12.954 "model_number": "SPDK bdev Controller", 00:15:12.954 "max_namespaces": 32, 00:15:12.954 "min_cntlid": 1, 00:15:12.954 "max_cntlid": 65519, 00:15:12.954 "namespaces": [ 
00:15:12.954 { 00:15:12.954 "nsid": 1, 00:15:12.954 "bdev_name": "Malloc2", 00:15:12.954 "name": "Malloc2", 00:15:12.954 "nguid": "177B8CE1BDF547F7A09F32CF9D6CD3F0", 00:15:12.954 "uuid": "177b8ce1-bdf5-47f7-a09f-32cf9d6cd3f0" 00:15:12.954 } 00:15:12.954 ] 00:15:12.954 } 00:15:12.954 ] 00:15:12.954 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1275373 00:15:12.954 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.954 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:12.954 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:12.954 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:12.954 [2024-12-05 21:08:20.944363] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
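For each namespace in the `nvmf_get_subsystems` listings above, the `nguid` field is the namespace `uuid` with the dashes dropped and the hex upper-cased — for these malloc bdevs SPDK reports the 16-byte NGUID as the same bytes as the UUID. A small sketch of that relationship, using the Malloc1 values from the listing:

```python
import uuid

def nguid_from_uuid(u: str) -> str:
    # Render a UUID the way nvmf_get_subsystems prints NGUID:
    # 32 upper-case hex digits, no dashes.
    return uuid.UUID(u).hex.upper()

# Values taken from the Malloc1 namespace above.
assert nguid_from_uuid("2500d999-ec04-4b06-9c8f-93a2df07a427") \
    == "2500D999EC044B069C8F93A2DF07A427"
```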
00:15:12.954 [2024-12-05 21:08:20.944424] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275599 ] 00:15:12.954 [2024-12-05 21:08:20.981902] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:12.954 [2024-12-05 21:08:20.991207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.954 [2024-12-05 21:08:20.991232] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc93e9b2000 00:15:12.954 [2024-12-05 21:08:20.992204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.954 [2024-12-05 21:08:20.993204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.954 [2024-12-05 21:08:20.994216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.954 [2024-12-05 21:08:20.995221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.954 [2024-12-05 21:08:20.996234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.954 [2024-12-05 21:08:20.997244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.954 [2024-12-05 21:08:20.998250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.954 
[2024-12-05 21:08:20.999255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.954 [2024-12-05 21:08:21.000264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.954 [2024-12-05 21:08:21.000274] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc93e9a7000 00:15:12.954 [2024-12-05 21:08:21.001186] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.954 [2024-12-05 21:08:21.010552] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:12.954 [2024-12-05 21:08:21.010575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:12.954 [2024-12-05 21:08:21.015658] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.954 [2024-12-05 21:08:21.015695] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:12.954 [2024-12-05 21:08:21.015766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:12.954 [2024-12-05 21:08:21.015778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:12.954 [2024-12-05 21:08:21.015786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:12.954 [2024-12-05 21:08:21.016661] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:12.954 [2024-12-05 21:08:21.016671] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:12.954 [2024-12-05 21:08:21.016677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:12.954 [2024-12-05 21:08:21.017666] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.954 [2024-12-05 21:08:21.017674] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:12.954 [2024-12-05 21:08:21.017680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:12.954 [2024-12-05 21:08:21.018677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:12.954 [2024-12-05 21:08:21.018686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:12.954 [2024-12-05 21:08:21.019676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:12.954 [2024-12-05 21:08:21.019685] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:12.954 [2024-12-05 21:08:21.019690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:12.954 [2024-12-05 21:08:21.019695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:12.954 [2024-12-05 21:08:21.019803] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:12.954 [2024-12-05 21:08:21.019807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:12.954 [2024-12-05 21:08:21.019812] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:12.955 [2024-12-05 21:08:21.020682] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:12.955 [2024-12-05 21:08:21.021692] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:12.955 [2024-12-05 21:08:21.022701] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.955 [2024-12-05 21:08:21.023706] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.955 [2024-12-05 21:08:21.023744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:12.955 [2024-12-05 21:08:21.024720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:12.955 [2024-12-05 21:08:21.024729] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:12.955 [2024-12-05 21:08:21.024733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.024752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:12.955 [2024-12-05 21:08:21.024759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.024773] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.955 [2024-12-05 21:08:21.024777] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.955 [2024-12-05 21:08:21.024780] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.955 [2024-12-05 21:08:21.024790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.955 [2024-12-05 21:08:21.031376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:12.955 [2024-12-05 21:08:21.031387] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:12.955 [2024-12-05 21:08:21.031393] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:12.955 [2024-12-05 21:08:21.031397] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:12.955 [2024-12-05 21:08:21.031402] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:12.955 [2024-12-05 21:08:21.031406] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:12.955 [2024-12-05 21:08:21.031410] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:12.955 [2024-12-05 21:08:21.031414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.031421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.031430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:12.955 [2024-12-05 21:08:21.039373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:12.955 [2024-12-05 21:08:21.039384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.955 [2024-12-05 21:08:21.039392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.955 [2024-12-05 21:08:21.039399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.955 [2024-12-05 21:08:21.039407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.955 [2024-12-05 21:08:21.039411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.039419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.039427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:12.955 [2024-12-05 21:08:21.047373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:12.955 [2024-12-05 21:08:21.047380] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:12.955 [2024-12-05 21:08:21.047386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.047393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.047398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.047406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.955 [2024-12-05 21:08:21.055372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:12.955 [2024-12-05 21:08:21.055428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:12.955 [2024-12-05 21:08:21.055435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:12.955 
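The `nvme_pcie_prp_list_append` records in this trace show how the driver sizes the PRP list for a contiguous buffer: the page-aligned 4096-byte identify buffer needs a single PRP entry, while the 8192-byte GET LOG PAGE buffer later on needs two (PRP1 plus PRP2). A sketch of that entry count, assuming the 4 KiB memory page size reported in the identify data below:

```python
def prp_entry_count(virt_addr: int, length: int, page_size: int = 4096) -> int:
    """Number of PRP entries for a contiguous buffer: one per memory
    page touched, where the first entry may start mid-page."""
    offset = virt_addr % page_size
    return (offset + length + page_size - 1) // page_size

# Matches the trace: 0x2000002fb000 len 4096 -> 1 entry,
# 0x2000002f6000 len 8192 -> 2 entries.
```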
[2024-12-05 21:08:21.055442] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:12.955 [2024-12-05 21:08:21.055446] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:12.955 [2024-12-05 21:08:21.055449] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.955 [2024-12-05 21:08:21.055455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.063372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.063382] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:13.214 [2024-12-05 21:08:21.063394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.063401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.063407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.214 [2024-12-05 21:08:21.063411] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.214 [2024-12-05 21:08:21.063415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.214 [2024-12-05 21:08:21.063420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.071372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.071385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.071393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.071399] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.214 [2024-12-05 21:08:21.071404] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.214 [2024-12-05 21:08:21.071406] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.214 [2024-12-05 21:08:21.071412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.079373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.079382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079415] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:13.214 [2024-12-05 21:08:21.079419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:13.214 [2024-12-05 21:08:21.079424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:13.214 [2024-12-05 21:08:21.079439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.087370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.087383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.095372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.095384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.103371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 
21:08:21.103384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.111373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.111392] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:13.214 [2024-12-05 21:08:21.111396] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:13.214 [2024-12-05 21:08:21.111400] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:13.214 [2024-12-05 21:08:21.111403] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:13.214 [2024-12-05 21:08:21.111406] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:13.214 [2024-12-05 21:08:21.111411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:13.214 [2024-12-05 21:08:21.111418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:13.214 [2024-12-05 21:08:21.111424] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:13.214 [2024-12-05 21:08:21.111430] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.214 [2024-12-05 21:08:21.111436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.111443] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:13.214 [2024-12-05 21:08:21.111447] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.214 [2024-12-05 21:08:21.111451] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.214 [2024-12-05 21:08:21.111456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.111463] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:13.214 [2024-12-05 21:08:21.111468] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:13.214 [2024-12-05 21:08:21.111471] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.214 [2024-12-05 21:08:21.111477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:13.214 [2024-12-05 21:08:21.119373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.119389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.119398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:13.214 [2024-12-05 21:08:21.119404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:13.214 ===================================================== 00:15:13.214 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.214 ===================================================== 00:15:13.214 Controller Capabilities/Features 00:15:13.214 
================================ 00:15:13.214 Vendor ID: 4e58 00:15:13.214 Subsystem Vendor ID: 4e58 00:15:13.214 Serial Number: SPDK2 00:15:13.214 Model Number: SPDK bdev Controller 00:15:13.214 Firmware Version: 25.01 00:15:13.214 Recommended Arb Burst: 6 00:15:13.214 IEEE OUI Identifier: 8d 6b 50 00:15:13.214 Multi-path I/O 00:15:13.214 May have multiple subsystem ports: Yes 00:15:13.214 May have multiple controllers: Yes 00:15:13.214 Associated with SR-IOV VF: No 00:15:13.214 Max Data Transfer Size: 131072 00:15:13.214 Max Number of Namespaces: 32 00:15:13.214 Max Number of I/O Queues: 127 00:15:13.214 NVMe Specification Version (VS): 1.3 00:15:13.214 NVMe Specification Version (Identify): 1.3 00:15:13.214 Maximum Queue Entries: 256 00:15:13.214 Contiguous Queues Required: Yes 00:15:13.214 Arbitration Mechanisms Supported 00:15:13.214 Weighted Round Robin: Not Supported 00:15:13.214 Vendor Specific: Not Supported 00:15:13.214 Reset Timeout: 15000 ms 00:15:13.214 Doorbell Stride: 4 bytes 00:15:13.214 NVM Subsystem Reset: Not Supported 00:15:13.214 Command Sets Supported 00:15:13.214 NVM Command Set: Supported 00:15:13.214 Boot Partition: Not Supported 00:15:13.214 Memory Page Size Minimum: 4096 bytes 00:15:13.214 Memory Page Size Maximum: 4096 bytes 00:15:13.214 Persistent Memory Region: Not Supported 00:15:13.214 Optional Asynchronous Events Supported 00:15:13.214 Namespace Attribute Notices: Supported 00:15:13.214 Firmware Activation Notices: Not Supported 00:15:13.215 ANA Change Notices: Not Supported 00:15:13.215 PLE Aggregate Log Change Notices: Not Supported 00:15:13.215 LBA Status Info Alert Notices: Not Supported 00:15:13.215 EGE Aggregate Log Change Notices: Not Supported 00:15:13.215 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.215 Zone Descriptor Change Notices: Not Supported 00:15:13.215 Discovery Log Change Notices: Not Supported 00:15:13.215 Controller Attributes 00:15:13.215 128-bit Host Identifier: Supported 00:15:13.215 
Non-Operational Permissive Mode: Not Supported 00:15:13.215 NVM Sets: Not Supported 00:15:13.215 Read Recovery Levels: Not Supported 00:15:13.215 Endurance Groups: Not Supported 00:15:13.215 Predictable Latency Mode: Not Supported 00:15:13.215 Traffic Based Keep ALive: Not Supported 00:15:13.215 Namespace Granularity: Not Supported 00:15:13.215 SQ Associations: Not Supported 00:15:13.215 UUID List: Not Supported 00:15:13.215 Multi-Domain Subsystem: Not Supported 00:15:13.215 Fixed Capacity Management: Not Supported 00:15:13.215 Variable Capacity Management: Not Supported 00:15:13.215 Delete Endurance Group: Not Supported 00:15:13.215 Delete NVM Set: Not Supported 00:15:13.215 Extended LBA Formats Supported: Not Supported 00:15:13.215 Flexible Data Placement Supported: Not Supported 00:15:13.215 00:15:13.215 Controller Memory Buffer Support 00:15:13.215 ================================ 00:15:13.215 Supported: No 00:15:13.215 00:15:13.215 Persistent Memory Region Support 00:15:13.215 ================================ 00:15:13.215 Supported: No 00:15:13.215 00:15:13.215 Admin Command Set Attributes 00:15:13.215 ============================ 00:15:13.215 Security Send/Receive: Not Supported 00:15:13.215 Format NVM: Not Supported 00:15:13.215 Firmware Activate/Download: Not Supported 00:15:13.215 Namespace Management: Not Supported 00:15:13.215 Device Self-Test: Not Supported 00:15:13.215 Directives: Not Supported 00:15:13.215 NVMe-MI: Not Supported 00:15:13.215 Virtualization Management: Not Supported 00:15:13.215 Doorbell Buffer Config: Not Supported 00:15:13.215 Get LBA Status Capability: Not Supported 00:15:13.215 Command & Feature Lockdown Capability: Not Supported 00:15:13.215 Abort Command Limit: 4 00:15:13.215 Async Event Request Limit: 4 00:15:13.215 Number of Firmware Slots: N/A 00:15:13.215 Firmware Slot 1 Read-Only: N/A 00:15:13.215 Firmware Activation Without Reset: N/A 00:15:13.215 Multiple Update Detection Support: N/A 00:15:13.215 Firmware Update 
Granularity: No Information Provided 00:15:13.215 Per-Namespace SMART Log: No 00:15:13.215 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.215 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:13.215 Command Effects Log Page: Supported 00:15:13.215 Get Log Page Extended Data: Supported 00:15:13.215 Telemetry Log Pages: Not Supported 00:15:13.215 Persistent Event Log Pages: Not Supported 00:15:13.215 Supported Log Pages Log Page: May Support 00:15:13.215 Commands Supported & Effects Log Page: Not Supported 00:15:13.215 Feature Identifiers & Effects Log Page:May Support 00:15:13.215 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.215 Data Area 4 for Telemetry Log: Not Supported 00:15:13.215 Error Log Page Entries Supported: 128 00:15:13.215 Keep Alive: Supported 00:15:13.215 Keep Alive Granularity: 10000 ms 00:15:13.215 00:15:13.215 NVM Command Set Attributes 00:15:13.215 ========================== 00:15:13.215 Submission Queue Entry Size 00:15:13.215 Max: 64 00:15:13.215 Min: 64 00:15:13.215 Completion Queue Entry Size 00:15:13.215 Max: 16 00:15:13.215 Min: 16 00:15:13.215 Number of Namespaces: 32 00:15:13.215 Compare Command: Supported 00:15:13.215 Write Uncorrectable Command: Not Supported 00:15:13.215 Dataset Management Command: Supported 00:15:13.215 Write Zeroes Command: Supported 00:15:13.215 Set Features Save Field: Not Supported 00:15:13.215 Reservations: Not Supported 00:15:13.215 Timestamp: Not Supported 00:15:13.215 Copy: Supported 00:15:13.215 Volatile Write Cache: Present 00:15:13.215 Atomic Write Unit (Normal): 1 00:15:13.215 Atomic Write Unit (PFail): 1 00:15:13.215 Atomic Compare & Write Unit: 1 00:15:13.215 Fused Compare & Write: Supported 00:15:13.215 Scatter-Gather List 00:15:13.215 SGL Command Set: Supported (Dword aligned) 00:15:13.215 SGL Keyed: Not Supported 00:15:13.215 SGL Bit Bucket Descriptor: Not Supported 00:15:13.215 SGL Metadata Pointer: Not Supported 00:15:13.215 Oversized SGL: Not Supported 00:15:13.215 SGL 
Metadata Address: Not Supported 00:15:13.215 SGL Offset: Not Supported 00:15:13.215 Transport SGL Data Block: Not Supported 00:15:13.215 Replay Protected Memory Block: Not Supported 00:15:13.215 00:15:13.215 Firmware Slot Information 00:15:13.215 ========================= 00:15:13.215 Active slot: 1 00:15:13.215 Slot 1 Firmware Revision: 25.01 00:15:13.215 00:15:13.215 00:15:13.215 Commands Supported and Effects 00:15:13.215 ============================== 00:15:13.215 Admin Commands 00:15:13.215 -------------- 00:15:13.215 Get Log Page (02h): Supported 00:15:13.215 Identify (06h): Supported 00:15:13.215 Abort (08h): Supported 00:15:13.215 Set Features (09h): Supported 00:15:13.215 Get Features (0Ah): Supported 00:15:13.215 Asynchronous Event Request (0Ch): Supported 00:15:13.215 Keep Alive (18h): Supported 00:15:13.215 I/O Commands 00:15:13.215 ------------ 00:15:13.215 Flush (00h): Supported LBA-Change 00:15:13.215 Write (01h): Supported LBA-Change 00:15:13.215 Read (02h): Supported 00:15:13.215 Compare (05h): Supported 00:15:13.215 Write Zeroes (08h): Supported LBA-Change 00:15:13.215 Dataset Management (09h): Supported LBA-Change 00:15:13.215 Copy (19h): Supported LBA-Change 00:15:13.215 00:15:13.215 Error Log 00:15:13.215 ========= 00:15:13.215 00:15:13.215 Arbitration 00:15:13.215 =========== 00:15:13.215 Arbitration Burst: 1 00:15:13.215 00:15:13.215 Power Management 00:15:13.215 ================ 00:15:13.215 Number of Power States: 1 00:15:13.215 Current Power State: Power State #0 00:15:13.215 Power State #0: 00:15:13.215 Max Power: 0.00 W 00:15:13.215 Non-Operational State: Operational 00:15:13.215 Entry Latency: Not Reported 00:15:13.215 Exit Latency: Not Reported 00:15:13.215 Relative Read Throughput: 0 00:15:13.215 Relative Read Latency: 0 00:15:13.215 Relative Write Throughput: 0 00:15:13.215 Relative Write Latency: 0 00:15:13.215 Idle Power: Not Reported 00:15:13.215 Active Power: Not Reported 00:15:13.215 Non-Operational Permissive Mode: Not 
Supported 00:15:13.215 00:15:13.215 Health Information 00:15:13.215 ================== 00:15:13.215 Critical Warnings: 00:15:13.215 Available Spare Space: OK 00:15:13.215 Temperature: OK 00:15:13.215 Device Reliability: OK 00:15:13.215 Read Only: No 00:15:13.215 Volatile Memory Backup: OK 00:15:13.215 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:13.215 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:13.215 Available Spare: 0% 00:15:13.215 Available Sp[2024-12-05 21:08:21.119491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:13.215 [2024-12-05 21:08:21.127375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:13.215 [2024-12-05 21:08:21.127403] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:13.215 [2024-12-05 21:08:21.127411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.215 [2024-12-05 21:08:21.127417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.215 [2024-12-05 21:08:21.127422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.215 [2024-12-05 21:08:21.127428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.215 [2024-12-05 21:08:21.127467] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:13.215 [2024-12-05 21:08:21.127477] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:13.215 
[2024-12-05 21:08:21.128469] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.215 [2024-12-05 21:08:21.128513] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:13.215 [2024-12-05 21:08:21.128519] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:13.215 [2024-12-05 21:08:21.129469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:13.215 [2024-12-05 21:08:21.129484] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:13.215 [2024-12-05 21:08:21.129529] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:13.215 [2024-12-05 21:08:21.130492] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:13.215 are Threshold: 0% 00:15:13.215 Life Percentage Used: 0% 00:15:13.216 Data Units Read: 0 00:15:13.216 Data Units Written: 0 00:15:13.216 Host Read Commands: 0 00:15:13.216 Host Write Commands: 0 00:15:13.216 Controller Busy Time: 0 minutes 00:15:13.216 Power Cycles: 0 00:15:13.216 Power On Hours: 0 hours 00:15:13.216 Unsafe Shutdowns: 0 00:15:13.216 Unrecoverable Media Errors: 0 00:15:13.216 Lifetime Error Log Entries: 0 00:15:13.216 Warning Temperature Time: 0 minutes 00:15:13.216 Critical Temperature Time: 0 minutes 00:15:13.216 00:15:13.216 Number of Queues 00:15:13.216 ================ 00:15:13.216 Number of I/O Submission Queues: 127 00:15:13.216 Number of I/O Completion Queues: 127 00:15:13.216 00:15:13.216 Active Namespaces 00:15:13.216 ================= 00:15:13.216 Namespace ID:1 00:15:13.216 Error Recovery Timeout: Unlimited 
00:15:13.216 Command Set Identifier: NVM (00h) 00:15:13.216 Deallocate: Supported 00:15:13.216 Deallocated/Unwritten Error: Not Supported 00:15:13.216 Deallocated Read Value: Unknown 00:15:13.216 Deallocate in Write Zeroes: Not Supported 00:15:13.216 Deallocated Guard Field: 0xFFFF 00:15:13.216 Flush: Supported 00:15:13.216 Reservation: Supported 00:15:13.216 Namespace Sharing Capabilities: Multiple Controllers 00:15:13.216 Size (in LBAs): 131072 (0GiB) 00:15:13.216 Capacity (in LBAs): 131072 (0GiB) 00:15:13.216 Utilization (in LBAs): 131072 (0GiB) 00:15:13.216 NGUID: 177B8CE1BDF547F7A09F32CF9D6CD3F0 00:15:13.216 UUID: 177b8ce1-bdf5-47f7-a09f-32cf9d6cd3f0 00:15:13.216 Thin Provisioning: Not Supported 00:15:13.216 Per-NS Atomic Units: Yes 00:15:13.216 Atomic Boundary Size (Normal): 0 00:15:13.216 Atomic Boundary Size (PFail): 0 00:15:13.216 Atomic Boundary Offset: 0 00:15:13.216 Maximum Single Source Range Length: 65535 00:15:13.216 Maximum Copy Length: 65535 00:15:13.216 Maximum Source Range Count: 1 00:15:13.216 NGUID/EUI64 Never Reused: No 00:15:13.216 Namespace Write Protected: No 00:15:13.216 Number of LBA Formats: 1 00:15:13.216 Current LBA Format: LBA Format #00 00:15:13.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:13.216 00:15:13.216 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:13.473 [2024-12-05 21:08:21.358614] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.838 Initializing NVMe Controllers 00:15:18.838 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.838 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:18.838 Initialization complete. Launching workers. 00:15:18.838 ======================================================== 00:15:18.838 Latency(us) 00:15:18.838 Device Information : IOPS MiB/s Average min max 00:15:18.838 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39976.53 156.16 3202.24 977.21 8103.42 00:15:18.838 ======================================================== 00:15:18.838 Total : 39976.53 156.16 3202.24 977.21 8103.42 00:15:18.838 00:15:18.838 [2024-12-05 21:08:26.463621] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.838 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:18.838 [2024-12-05 21:08:26.692316] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.092 Initializing NVMe Controllers 00:15:24.092 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.092 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.092 Initialization complete. Launching workers. 
00:15:24.092 ======================================================== 00:15:24.092 Latency(us) 00:15:24.093 Device Information : IOPS MiB/s Average min max 00:15:24.093 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39890.39 155.82 3208.39 968.16 7423.55 00:15:24.093 ======================================================== 00:15:24.093 Total : 39890.39 155.82 3208.39 968.16 7423.55 00:15:24.093 00:15:24.093 [2024-12-05 21:08:31.711395] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.093 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:24.093 [2024-12-05 21:08:31.923627] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.354 [2024-12-05 21:08:37.056466] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.354 Initializing NVMe Controllers 00:15:29.354 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.354 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:29.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:29.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:29.354 Initialization complete. Launching workers. 
00:15:29.354 Starting thread on core 2 00:15:29.354 Starting thread on core 3 00:15:29.354 Starting thread on core 1 00:15:29.354 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:29.354 [2024-12-05 21:08:37.357874] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.637 [2024-12-05 21:08:40.413192] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.637 Initializing NVMe Controllers 00:15:32.637 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.637 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.637 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:32.637 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:32.637 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:32.637 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:32.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:32.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:32.637 Initialization complete. Launching workers. 
00:15:32.637 Starting thread on core 1 with urgent priority queue 00:15:32.637 Starting thread on core 2 with urgent priority queue 00:15:32.637 Starting thread on core 3 with urgent priority queue 00:15:32.637 Starting thread on core 0 with urgent priority queue 00:15:32.637 SPDK bdev Controller (SPDK2 ) core 0: 9528.00 IO/s 10.50 secs/100000 ios 00:15:32.637 SPDK bdev Controller (SPDK2 ) core 1: 8772.00 IO/s 11.40 secs/100000 ios 00:15:32.637 SPDK bdev Controller (SPDK2 ) core 2: 8578.00 IO/s 11.66 secs/100000 ios 00:15:32.637 SPDK bdev Controller (SPDK2 ) core 3: 10239.00 IO/s 9.77 secs/100000 ios 00:15:32.637 ======================================================== 00:15:32.637 00:15:32.637 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:32.637 [2024-12-05 21:08:40.699276] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.637 Initializing NVMe Controllers 00:15:32.637 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.637 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.637 Namespace ID: 1 size: 0GB 00:15:32.637 Initialization complete. 00:15:32.637 INFO: using host memory buffer for IO 00:15:32.637 Hello world! 
00:15:32.637 [2024-12-05 21:08:40.710349] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.895 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:32.895 [2024-12-05 21:08:40.994072] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.268 Initializing NVMe Controllers 00:15:34.268 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.268 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.268 Initialization complete. Launching workers. 00:15:34.268 submit (in ns) avg, min, max = 6828.6, 3201.0, 3998617.1 00:15:34.268 complete (in ns) avg, min, max = 20138.0, 1773.3, 4992675.2 00:15:34.268 00:15:34.268 Submit histogram 00:15:34.268 ================ 00:15:34.268 Range in us Cumulative Count 00:15:34.268 3.200 - 3.215: 0.0365% ( 6) 00:15:34.268 3.215 - 3.230: 0.1948% ( 26) 00:15:34.268 3.230 - 3.246: 0.4382% ( 40) 00:15:34.268 3.246 - 3.261: 0.8948% ( 75) 00:15:34.268 3.261 - 3.276: 2.7878% ( 311) 00:15:34.268 3.276 - 3.291: 7.5233% ( 778) 00:15:34.268 3.291 - 3.307: 13.3849% ( 963) 00:15:34.268 3.307 - 3.322: 19.8308% ( 1059) 00:15:34.268 3.322 - 3.337: 26.4167% ( 1082) 00:15:34.268 3.337 - 3.352: 32.5826% ( 1013) 00:15:34.268 3.352 - 3.368: 37.7868% ( 855) 00:15:34.268 3.368 - 3.383: 43.9102% ( 1006) 00:15:34.268 3.383 - 3.398: 49.7413% ( 958) 00:15:34.268 3.398 - 3.413: 54.8786% ( 844) 00:15:34.268 3.413 - 3.429: 60.3141% ( 893) 00:15:34.268 3.429 - 3.444: 68.1295% ( 1284) 00:15:34.268 3.444 - 3.459: 73.7963% ( 931) 00:15:34.268 3.459 - 3.474: 78.8910% ( 837) 00:15:34.268 3.474 - 3.490: 82.7744% ( 638) 00:15:34.268 3.490 - 3.505: 85.3430% ( 422) 00:15:34.268 3.505 - 3.520: 87.1264% ( 293) 
00:15:34.268 3.520 - 3.535: 87.7047% ( 95) 00:15:34.268 3.535 - 3.550: 88.0516% ( 57) 00:15:34.268 3.550 - 3.566: 88.4047% ( 58) 00:15:34.268 3.566 - 3.581: 88.9464% ( 89) 00:15:34.268 3.581 - 3.596: 89.8594% ( 150) 00:15:34.268 3.596 - 3.611: 90.8333% ( 160) 00:15:34.268 3.611 - 3.627: 91.7159% ( 145) 00:15:34.268 3.627 - 3.642: 92.7080% ( 163) 00:15:34.268 3.642 - 3.657: 93.5967% ( 146) 00:15:34.268 3.657 - 3.672: 94.5036% ( 149) 00:15:34.268 3.672 - 3.688: 95.4349% ( 153) 00:15:34.268 3.688 - 3.703: 96.4818% ( 172) 00:15:34.268 3.703 - 3.718: 97.2914% ( 133) 00:15:34.268 3.718 - 3.733: 97.8696% ( 95) 00:15:34.268 3.733 - 3.749: 98.3261% ( 75) 00:15:34.268 3.749 - 3.764: 98.7035% ( 62) 00:15:34.268 3.764 - 3.779: 99.0139% ( 51) 00:15:34.268 3.779 - 3.794: 99.2513% ( 39) 00:15:34.268 3.794 - 3.810: 99.3670% ( 19) 00:15:34.268 3.810 - 3.825: 99.4826% ( 19) 00:15:34.268 3.825 - 3.840: 99.5557% ( 12) 00:15:34.268 3.840 - 3.855: 99.5922% ( 6) 00:15:34.268 3.855 - 3.870: 99.6287% ( 6) 00:15:34.268 3.870 - 3.886: 99.6409% ( 2) 00:15:34.268 4.998 - 5.029: 99.6470% ( 1) 00:15:34.268 5.272 - 5.303: 99.6531% ( 1) 00:15:34.268 5.333 - 5.364: 99.6591% ( 1) 00:15:34.268 5.364 - 5.394: 99.6652% ( 1) 00:15:34.268 5.455 - 5.486: 99.6713% ( 1) 00:15:34.268 5.486 - 5.516: 99.6835% ( 2) 00:15:34.268 5.547 - 5.577: 99.6957% ( 2) 00:15:34.268 5.577 - 5.608: 99.7017% ( 1) 00:15:34.268 5.608 - 5.638: 99.7139% ( 2) 00:15:34.268 5.790 - 5.821: 99.7200% ( 1) 00:15:34.268 5.851 - 5.882: 99.7261% ( 1) 00:15:34.268 5.882 - 5.912: 99.7322% ( 1) 00:15:34.268 5.943 - 5.973: 99.7444% ( 2) 00:15:34.268 6.004 - 6.034: 99.7504% ( 1) 00:15:34.268 6.248 - 6.278: 99.7626% ( 2) 00:15:34.268 6.278 - 6.309: 99.7687% ( 1) 00:15:34.268 6.309 - 6.339: 99.7748% ( 1) 00:15:34.268 6.400 - 6.430: 99.7809% ( 1) 00:15:34.268 6.430 - 6.461: 99.7870% ( 1) 00:15:34.268 6.461 - 6.491: 99.7991% ( 2) 00:15:34.269 6.491 - 6.522: 99.8052% ( 1) 00:15:34.269 6.552 - 6.583: 99.8113% ( 1) 00:15:34.269 6.583 - 6.613: 99.8174% 
( 1) 00:15:34.269 6.644 - 6.674: 99.8235% ( 1) 00:15:34.269 6.674 - 6.705: 99.8296% ( 1) 00:15:34.269 6.888 - 6.918: 99.8357% ( 1) 00:15:34.269 6.918 - 6.949: 99.8417% ( 1) 00:15:34.269 7.010 - 7.040: 99.8478% ( 1) 00:15:34.269 7.040 - 7.070: 99.8539% ( 1) 00:15:34.269 7.162 - 7.192: 99.8600% ( 1) 00:15:34.269 7.284 - 7.314: 99.8661% ( 1) 00:15:34.269 7.375 - 7.406: 99.8722% ( 1) 00:15:34.269 7.436 - 7.467: 99.8783% ( 1) 00:15:34.269 7.497 - 7.528: 99.8904% ( 2) 00:15:34.269 [2024-12-05 21:08:42.089349] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.269 7.589 - 7.619: 99.9087% ( 3) 00:15:34.269 8.350 - 8.411: 99.9148% ( 1) 00:15:34.269 3994.575 - 4025.783: 100.0000% ( 14) 00:15:34.269 00:15:34.269 Complete histogram 00:15:34.269 ================== 00:15:34.269 Range in us Cumulative Count 00:15:34.269 1.768 - 1.775: 0.0304% ( 5) 00:15:34.269 1.775 - 1.783: 0.3774% ( 57) 00:15:34.269 1.783 - 1.790: 0.9008% ( 86) 00:15:34.269 1.790 - 1.798: 1.6313% ( 120) 00:15:34.269 1.798 - 1.806: 2.3069% ( 111) 00:15:34.269 1.806 - 1.813: 2.6417% ( 55) 00:15:34.269 1.813 - 1.821: 3.5973% ( 157) 00:15:34.269 1.821 - 1.829: 13.5127% ( 1629) 00:15:34.269 1.829 - 1.836: 44.1110% ( 5027) 00:15:34.269 1.836 - 1.844: 74.7885% ( 5040) 00:15:34.269 1.844 - 1.851: 87.9177% ( 2157) 00:15:34.269 1.851 - 1.859: 92.6654% ( 780) 00:15:34.269 1.859 - 1.867: 95.2219% ( 420) 00:15:34.269 1.867 - 1.874: 96.2140% ( 163) 00:15:34.269 1.874 - 1.882: 96.5914% ( 62) 00:15:34.269 1.882 - 1.890: 96.7862% ( 32) 00:15:34.269 1.890 - 1.897: 97.1636% ( 62) 00:15:34.269 1.897 - 1.905: 97.7540% ( 97) 00:15:34.269 1.905 - 1.912: 98.3505% ( 98) 00:15:34.269 1.912 - 1.920: 98.7948% ( 73) 00:15:34.269 1.920 - 1.928: 99.0931% ( 49) 00:15:34.269 1.928 - 1.935: 99.2818% ( 31) 00:15:34.269 1.935 - 1.943: 99.3852% ( 17) 00:15:34.269 1.943 - 1.950: 99.4096% ( 4) 00:15:34.269 1.950 - 1.966: 99.4157% ( 1) 00:15:34.269 1.981 - 1.996: 99.4218% ( 1) 00:15:34.269 
2.453 - 2.469: 99.4278% ( 1) 00:15:34.269 3.825 - 3.840: 99.4339% ( 1) 00:15:34.269 3.855 - 3.870: 99.4400% ( 1) 00:15:34.269 3.870 - 3.886: 99.4461% ( 1) 00:15:34.269 3.962 - 3.992: 99.4522% ( 1) 00:15:34.269 4.175 - 4.206: 99.4583% ( 1) 00:15:34.269 4.206 - 4.236: 99.4644% ( 1) 00:15:34.269 4.236 - 4.267: 99.4704% ( 1) 00:15:34.269 4.328 - 4.358: 99.4765% ( 1) 00:15:34.269 4.450 - 4.480: 99.4887% ( 2) 00:15:34.269 4.632 - 4.663: 99.4948% ( 1) 00:15:34.269 4.724 - 4.754: 99.5009% ( 1) 00:15:34.269 5.303 - 5.333: 99.5070% ( 1) 00:15:34.269 5.394 - 5.425: 99.5131% ( 1) 00:15:34.269 5.638 - 5.669: 99.5191% ( 1) 00:15:34.269 5.790 - 5.821: 99.5313% ( 2) 00:15:34.269 7.741 - 7.771: 99.5374% ( 1) 00:15:34.269 13.227 - 13.288: 99.5435% ( 1) 00:15:34.269 3978.971 - 3994.575: 99.5496% ( 1) 00:15:34.269 3994.575 - 4025.783: 99.9939% ( 73) 00:15:34.269 4962.011 - 4993.219: 100.0000% ( 1) 00:15:34.269 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:34.269 [ 00:15:34.269 { 00:15:34.269 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:34.269 "subtype": "Discovery", 00:15:34.269 "listen_addresses": [], 00:15:34.269 "allow_any_host": true, 00:15:34.269 "hosts": [] 00:15:34.269 }, 00:15:34.269 { 00:15:34.269 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:15:34.269 "subtype": "NVMe", 00:15:34.269 "listen_addresses": [ 00:15:34.269 { 00:15:34.269 "trtype": "VFIOUSER", 00:15:34.269 "adrfam": "IPv4", 00:15:34.269 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:34.269 "trsvcid": "0" 00:15:34.269 } 00:15:34.269 ], 00:15:34.269 "allow_any_host": true, 00:15:34.269 "hosts": [], 00:15:34.269 "serial_number": "SPDK1", 00:15:34.269 "model_number": "SPDK bdev Controller", 00:15:34.269 "max_namespaces": 32, 00:15:34.269 "min_cntlid": 1, 00:15:34.269 "max_cntlid": 65519, 00:15:34.269 "namespaces": [ 00:15:34.269 { 00:15:34.269 "nsid": 1, 00:15:34.269 "bdev_name": "Malloc1", 00:15:34.269 "name": "Malloc1", 00:15:34.269 "nguid": "2500D999EC044B069C8F93A2DF07A427", 00:15:34.269 "uuid": "2500d999-ec04-4b06-9c8f-93a2df07a427" 00:15:34.269 }, 00:15:34.269 { 00:15:34.269 "nsid": 2, 00:15:34.269 "bdev_name": "Malloc3", 00:15:34.269 "name": "Malloc3", 00:15:34.269 "nguid": "2AC88BC5B7FC4C9CBCABD2E927FAAF17", 00:15:34.269 "uuid": "2ac88bc5-b7fc-4c9c-bcab-d2e927faaf17" 00:15:34.269 } 00:15:34.269 ] 00:15:34.269 }, 00:15:34.269 { 00:15:34.269 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:34.269 "subtype": "NVMe", 00:15:34.269 "listen_addresses": [ 00:15:34.269 { 00:15:34.269 "trtype": "VFIOUSER", 00:15:34.269 "adrfam": "IPv4", 00:15:34.269 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:34.269 "trsvcid": "0" 00:15:34.269 } 00:15:34.269 ], 00:15:34.269 "allow_any_host": true, 00:15:34.269 "hosts": [], 00:15:34.269 "serial_number": "SPDK2", 00:15:34.269 "model_number": "SPDK bdev Controller", 00:15:34.269 "max_namespaces": 32, 00:15:34.269 "min_cntlid": 1, 00:15:34.269 "max_cntlid": 65519, 00:15:34.269 "namespaces": [ 00:15:34.269 { 00:15:34.269 "nsid": 1, 00:15:34.269 "bdev_name": "Malloc2", 00:15:34.269 "name": "Malloc2", 00:15:34.269 "nguid": "177B8CE1BDF547F7A09F32CF9D6CD3F0", 00:15:34.269 "uuid": "177b8ce1-bdf5-47f7-a09f-32cf9d6cd3f0" 00:15:34.269 } 00:15:34.269 ] 00:15:34.269 } 00:15:34.269 ] 00:15:34.269 21:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1279061 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:34.269 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:34.527 [2024-12-05 21:08:42.481859] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.527 Malloc4 00:15:34.527 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:34.785 [2024-12-05 21:08:42.710399] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.786 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:34.786 Asynchronous Event Request test 00:15:34.786 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.786 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.786 Registering asynchronous event callbacks... 00:15:34.786 Starting namespace attribute notice tests for all controllers... 00:15:34.786 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:34.786 aer_cb - Changed Namespace 00:15:34.786 Cleaning up... 
00:15:35.044 [ 00:15:35.044 { 00:15:35.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.044 "subtype": "Discovery", 00:15:35.044 "listen_addresses": [], 00:15:35.044 "allow_any_host": true, 00:15:35.044 "hosts": [] 00:15:35.044 }, 00:15:35.044 { 00:15:35.044 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.044 "subtype": "NVMe", 00:15:35.044 "listen_addresses": [ 00:15:35.044 { 00:15:35.044 "trtype": "VFIOUSER", 00:15:35.044 "adrfam": "IPv4", 00:15:35.044 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.044 "trsvcid": "0" 00:15:35.044 } 00:15:35.044 ], 00:15:35.044 "allow_any_host": true, 00:15:35.044 "hosts": [], 00:15:35.044 "serial_number": "SPDK1", 00:15:35.044 "model_number": "SPDK bdev Controller", 00:15:35.044 "max_namespaces": 32, 00:15:35.044 "min_cntlid": 1, 00:15:35.044 "max_cntlid": 65519, 00:15:35.044 "namespaces": [ 00:15:35.044 { 00:15:35.044 "nsid": 1, 00:15:35.044 "bdev_name": "Malloc1", 00:15:35.044 "name": "Malloc1", 00:15:35.044 "nguid": "2500D999EC044B069C8F93A2DF07A427", 00:15:35.044 "uuid": "2500d999-ec04-4b06-9c8f-93a2df07a427" 00:15:35.044 }, 00:15:35.044 { 00:15:35.044 "nsid": 2, 00:15:35.044 "bdev_name": "Malloc3", 00:15:35.044 "name": "Malloc3", 00:15:35.044 "nguid": "2AC88BC5B7FC4C9CBCABD2E927FAAF17", 00:15:35.044 "uuid": "2ac88bc5-b7fc-4c9c-bcab-d2e927faaf17" 00:15:35.044 } 00:15:35.044 ] 00:15:35.044 }, 00:15:35.044 { 00:15:35.044 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.044 "subtype": "NVMe", 00:15:35.044 "listen_addresses": [ 00:15:35.044 { 00:15:35.044 "trtype": "VFIOUSER", 00:15:35.044 "adrfam": "IPv4", 00:15:35.044 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.044 "trsvcid": "0" 00:15:35.044 } 00:15:35.044 ], 00:15:35.044 "allow_any_host": true, 00:15:35.044 "hosts": [], 00:15:35.044 "serial_number": "SPDK2", 00:15:35.044 "model_number": "SPDK bdev Controller", 00:15:35.044 "max_namespaces": 32, 00:15:35.044 "min_cntlid": 1, 00:15:35.044 "max_cntlid": 65519, 00:15:35.044 "namespaces": [ 
00:15:35.044 { 00:15:35.044 "nsid": 1, 00:15:35.044 "bdev_name": "Malloc2", 00:15:35.044 "name": "Malloc2", 00:15:35.044 "nguid": "177B8CE1BDF547F7A09F32CF9D6CD3F0", 00:15:35.044 "uuid": "177b8ce1-bdf5-47f7-a09f-32cf9d6cd3f0" 00:15:35.044 }, 00:15:35.044 { 00:15:35.044 "nsid": 2, 00:15:35.044 "bdev_name": "Malloc4", 00:15:35.044 "name": "Malloc4", 00:15:35.044 "nguid": "7616009B1E5B40C9AFF2C726EDF34921", 00:15:35.044 "uuid": "7616009b-1e5b-40c9-aff2-c726edf34921" 00:15:35.044 } 00:15:35.044 ] 00:15:35.044 } 00:15:35.044 ] 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1279061 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1271441 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1271441 ']' 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1271441 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1271441 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1271441' 00:15:35.044 killing process with pid 1271441 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1271441 00:15:35.044 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1271441 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1279290 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1279290' 00:15:35.303 Process pid: 1279290 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1279290 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1279290 ']' 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.303 
21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.303 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:35.303 [2024-12-05 21:08:43.287706] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:35.303 [2024-12-05 21:08:43.288595] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:15:35.303 [2024-12-05 21:08:43.288634] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.303 [2024-12-05 21:08:43.364180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.303 [2024-12-05 21:08:43.405352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.303 [2024-12-05 21:08:43.405390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.303 [2024-12-05 21:08:43.405397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.303 [2024-12-05 21:08:43.405403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.303 [2024-12-05 21:08:43.405408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.303 [2024-12-05 21:08:43.406863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.303 [2024-12-05 21:08:43.406968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.303 [2024-12-05 21:08:43.407000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.303 [2024-12-05 21:08:43.407001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.562 [2024-12-05 21:08:43.476033] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:35.562 [2024-12-05 21:08:43.476548] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:35.562 [2024-12-05 21:08:43.476911] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:35.562 [2024-12-05 21:08:43.477085] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:35.562 [2024-12-05 21:08:43.477139] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
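After the interrupt-mode target comes up, the script rebuilds one `/var/run/vfio-user/domain/vfio-userN/N` directory per device and registers each as a VFIOUSER listener, as traced in the following log entries. A sketch of that per-device loop, rooted in a temp directory instead of `/var/run/vfio-user` (`NUM_DEVICES=2` matches the log's `seq 1 2`; the `rpc.py` calls are shown as comments since they require a running `nvmf_tgt`):

```shell
#!/usr/bin/env bash
# Recreate the per-device vfio-user directory layout from the log,
# rooted in a temp dir instead of /var/run/vfio-user.
root=$(mktemp -d)
NUM_DEVICES=2

for i in $(seq 1 "$NUM_DEVICES"); do
    mkdir -p "$root/domain/vfio-user$i/$i"
    # With a running target, each directory then backs one subsystem,
    # following the rpc.py sequence traced in the log:
    #   rpc.py bdev_malloc_create 64 512 -b Malloc$i
    #   rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    #   rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    #   rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
    #       -t VFIOUSER -a "$root/domain/vfio-user$i/$i" -s 0
done

ls "$root/domain"
```

The directory path doubles as the listener's `traddr`, which is why the `nvmf_get_subsystems` output earlier reports `"traddr": "/var/run/vfio-user/domain/vfio-user1/1"` and `.../vfio-user2/2` with `"trtype": "VFIOUSER"`.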
00:15:36.128 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.128 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:36.128 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:37.064 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:37.322 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:37.322 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:37.322 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:37.322 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:37.322 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:37.611 Malloc1 00:15:37.611 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:37.868 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:37.868 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:38.125 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.125 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:38.125 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:38.381 Malloc2 00:15:38.381 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:38.637 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1279290 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1279290 ']' 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1279290 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.893 21:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1279290 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1279290' 00:15:38.893 killing process with pid 1279290 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1279290 00:15:38.893 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1279290 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:39.151 00:15:39.151 real 0m51.370s 00:15:39.151 user 3m16.589s 00:15:39.151 sys 0m3.194s 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.151 ************************************ 00:15:39.151 END TEST nvmf_vfio_user 00:15:39.151 ************************************ 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.151 21:08:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.410 ************************************ 00:15:39.410 START TEST nvmf_vfio_user_nvme_compliance 00:15:39.410 ************************************ 00:15:39.410 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.410 * Looking for test storage... 00:15:39.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.411 21:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.411 21:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:39.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.411 --rc genhtml_branch_coverage=1 00:15:39.411 --rc genhtml_function_coverage=1 00:15:39.411 --rc genhtml_legend=1 00:15:39.411 --rc geninfo_all_blocks=1 00:15:39.411 --rc geninfo_unexecuted_blocks=1 00:15:39.411 00:15:39.411 ' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:39.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.411 --rc genhtml_branch_coverage=1 00:15:39.411 --rc genhtml_function_coverage=1 00:15:39.411 --rc genhtml_legend=1 00:15:39.411 --rc geninfo_all_blocks=1 00:15:39.411 --rc geninfo_unexecuted_blocks=1 00:15:39.411 00:15:39.411 ' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:39.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.411 --rc genhtml_branch_coverage=1 00:15:39.411 --rc genhtml_function_coverage=1 00:15:39.411 --rc 
genhtml_legend=1 00:15:39.411 --rc geninfo_all_blocks=1 00:15:39.411 --rc geninfo_unexecuted_blocks=1 00:15:39.411 00:15:39.411 ' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:39.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.411 --rc genhtml_branch_coverage=1 00:15:39.411 --rc genhtml_function_coverage=1 00:15:39.411 --rc genhtml_legend=1 00:15:39.411 --rc geninfo_all_blocks=1 00:15:39.411 --rc geninfo_unexecuted_blocks=1 00:15:39.411 00:15:39.411 ' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.411 21:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.411 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.412 21:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1280062 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1280062' 00:15:39.412 Process pid: 1280062 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1280062 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1280062 ']' 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.412 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.412 [2024-12-05 21:08:47.516190] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:15:39.412 [2024-12-05 21:08:47.516238] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.670 [2024-12-05 21:08:47.588521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.670 [2024-12-05 21:08:47.630448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.670 [2024-12-05 21:08:47.630487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.670 [2024-12-05 21:08:47.630494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.670 [2024-12-05 21:08:47.630500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.670 [2024-12-05 21:08:47.630505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.670 [2024-12-05 21:08:47.631929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.670 [2024-12-05 21:08:47.632037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.671 [2024-12-05 21:08:47.632039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.671 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.671 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:39.671 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.047 21:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.047 malloc0 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:41.047 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:41.047 00:15:41.047 00:15:41.047 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.047 http://cunit.sourceforge.net/ 00:15:41.047 00:15:41.047 00:15:41.047 Suite: nvme_compliance 00:15:41.047 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 21:08:48.970101] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.047 [2024-12-05 21:08:48.971452] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:41.047 [2024-12-05 21:08:48.971467] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:41.047 [2024-12-05 21:08:48.971473] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:41.047 [2024-12-05 21:08:48.973129] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.047 passed 00:15:41.047 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 21:08:49.049676] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.047 [2024-12-05 21:08:49.052690] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.047 passed 00:15:41.047 Test: admin_identify_ns ...[2024-12-05 21:08:49.134652] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.305 [2024-12-05 21:08:49.194387] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:41.305 [2024-12-05 21:08:49.202377] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:41.305 [2024-12-05 21:08:49.223467] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:41.305 passed 00:15:41.305 Test: admin_get_features_mandatory_features ...[2024-12-05 21:08:49.300309] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.305 [2024-12-05 21:08:49.303327] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.305 passed 00:15:41.305 Test: admin_get_features_optional_features ...[2024-12-05 21:08:49.378844] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.305 [2024-12-05 21:08:49.381867] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.305 passed 00:15:41.563 Test: admin_set_features_number_of_queues ...[2024-12-05 21:08:49.457571] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.563 [2024-12-05 21:08:49.566452] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.563 passed 00:15:41.563 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 21:08:49.639173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.563 [2024-12-05 21:08:49.644212] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.820 passed 00:15:41.820 Test: admin_get_log_page_with_lpo ...[2024-12-05 21:08:49.719670] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.820 [2024-12-05 21:08:49.791378] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:41.820 [2024-12-05 21:08:49.804455] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.820 passed 00:15:41.820 Test: fabric_property_get ...[2024-12-05 21:08:49.878516] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.820 [2024-12-05 21:08:49.879744] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:41.820 [2024-12-05 21:08:49.881533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.820 passed 00:15:42.077 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 21:08:49.959029] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.077 [2024-12-05 21:08:49.960261] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:42.077 [2024-12-05 21:08:49.962045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.077 passed 00:15:42.077 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 21:08:50.039738] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.077 [2024-12-05 21:08:50.123378] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.077 [2024-12-05 21:08:50.139373] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.078 [2024-12-05 21:08:50.144516] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.078 passed 00:15:42.335 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 21:08:50.222180] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.335 [2024-12-05 21:08:50.223425] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:42.335 [2024-12-05 21:08:50.227217] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.335 passed 00:15:42.335 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 21:08:50.302716] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.335 [2024-12-05 21:08:50.382385] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:42.335 [2024-12-05 
21:08:50.406375] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.335 [2024-12-05 21:08:50.411458] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.335 passed 00:15:42.593 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 21:08:50.484363] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.593 [2024-12-05 21:08:50.485606] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:42.593 [2024-12-05 21:08:50.485629] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:42.593 [2024-12-05 21:08:50.490394] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.593 passed 00:15:42.593 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 21:08:50.566749] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.593 [2024-12-05 21:08:50.659378] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:42.593 [2024-12-05 21:08:50.667372] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:42.593 [2024-12-05 21:08:50.675379] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:42.593 [2024-12-05 21:08:50.683373] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:42.852 [2024-12-05 21:08:50.712464] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.852 passed 00:15:42.852 Test: admin_create_io_sq_verify_pc ...[2024-12-05 21:08:50.788349] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.852 [2024-12-05 21:08:50.803385] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:42.852 [2024-12-05 21:08:50.820621] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.852 passed 00:15:42.852 Test: admin_create_io_qp_max_qps ...[2024-12-05 21:08:50.896121] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.226 [2024-12-05 21:08:52.013379] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:44.484 [2024-12-05 21:08:52.409654] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.484 passed 00:15:44.484 Test: admin_create_io_sq_shared_cq ...[2024-12-05 21:08:52.484660] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.741 [2024-12-05 21:08:52.620373] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:44.741 [2024-12-05 21:08:52.657422] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.741 passed 00:15:44.741 00:15:44.741 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.741 suites 1 1 n/a 0 0 00:15:44.741 tests 18 18 18 0 0 00:15:44.741 asserts 360 360 360 0 n/a 00:15:44.741 00:15:44.741 Elapsed time = 1.516 seconds 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1280062 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1280062 ']' 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1280062 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1280062 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1280062' 00:15:44.741 killing process with pid 1280062 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1280062 00:15:44.741 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1280062 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:44.999 00:15:44.999 real 0m5.671s 00:15:44.999 user 0m15.851s 00:15:44.999 sys 0m0.515s 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.999 ************************************ 00:15:44.999 END TEST nvmf_vfio_user_nvme_compliance 00:15:44.999 ************************************ 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.999 21:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.999 ************************************ 00:15:44.999 START TEST nvmf_vfio_user_fuzz 00:15:44.999 ************************************ 00:15:44.999 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.999 * Looking for test storage... 00:15:44.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.999 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:44.999 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:44.999 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.259 21:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:45.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.259 --rc genhtml_branch_coverage=1 00:15:45.259 --rc genhtml_function_coverage=1 00:15:45.259 --rc genhtml_legend=1 00:15:45.259 --rc geninfo_all_blocks=1 00:15:45.259 --rc geninfo_unexecuted_blocks=1 00:15:45.259 00:15:45.259 ' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:45.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.259 --rc genhtml_branch_coverage=1 00:15:45.259 --rc genhtml_function_coverage=1 00:15:45.259 --rc genhtml_legend=1 00:15:45.259 --rc geninfo_all_blocks=1 00:15:45.259 --rc geninfo_unexecuted_blocks=1 00:15:45.259 00:15:45.259 ' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:45.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.259 --rc genhtml_branch_coverage=1 00:15:45.259 --rc genhtml_function_coverage=1 00:15:45.259 --rc genhtml_legend=1 00:15:45.259 --rc geninfo_all_blocks=1 00:15:45.259 --rc geninfo_unexecuted_blocks=1 00:15:45.259 00:15:45.259 ' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:45.259 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:45.259 --rc genhtml_branch_coverage=1 00:15:45.259 --rc genhtml_function_coverage=1 00:15:45.259 --rc genhtml_legend=1 00:15:45.259 --rc geninfo_all_blocks=1 00:15:45.259 --rc geninfo_unexecuted_blocks=1 00:15:45.259 00:15:45.259 ' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.259 21:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.259 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1281045 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1281045' 00:15:45.260 Process pid: 1281045 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1281045 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1281045 ']' 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.260 21:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.260 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.518 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.518 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:45.518 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:46.452 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.452 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 malloc0 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:46.453 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:18.618 Fuzzing completed. Shutting down the fuzz application 00:16:18.618 00:16:18.618 Dumping successful admin opcodes: 00:16:18.618 9, 10, 00:16:18.618 Dumping successful io opcodes: 00:16:18.618 0, 00:16:18.618 NS: 0x20000081ef00 I/O qp, Total commands completed: 1116932, total successful commands: 4395, random_seed: 627260224 00:16:18.618 NS: 0x20000081ef00 admin qp, Total commands completed: 273648, total successful commands: 64, random_seed: 240117568 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1281045 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1281045 ']' 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1281045 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281045 00:16:18.618 21:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281045' 00:16:18.618 killing process with pid 1281045 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1281045 00:16:18.618 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1281045 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:18.618 00:16:18.618 real 0m32.241s 00:16:18.618 user 0m33.556s 00:16:18.618 sys 0m27.539s 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.618 ************************************ 00:16:18.618 END TEST nvmf_vfio_user_fuzz 00:16:18.618 ************************************ 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.618 ************************************ 00:16:18.618 START TEST nvmf_auth_target 00:16:18.618 ************************************ 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.618 * Looking for test storage... 00:16:18.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.618 21:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.618 21:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:18.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.618 --rc genhtml_branch_coverage=1 00:16:18.618 --rc genhtml_function_coverage=1 00:16:18.618 --rc genhtml_legend=1 00:16:18.618 --rc geninfo_all_blocks=1 00:16:18.618 --rc geninfo_unexecuted_blocks=1 00:16:18.618 00:16:18.618 ' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:18.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.618 --rc genhtml_branch_coverage=1 00:16:18.618 --rc genhtml_function_coverage=1 00:16:18.618 --rc genhtml_legend=1 00:16:18.618 --rc geninfo_all_blocks=1 00:16:18.618 --rc geninfo_unexecuted_blocks=1 00:16:18.618 00:16:18.618 ' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:18.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.618 --rc genhtml_branch_coverage=1 00:16:18.618 --rc genhtml_function_coverage=1 00:16:18.618 --rc genhtml_legend=1 00:16:18.618 --rc geninfo_all_blocks=1 00:16:18.618 --rc geninfo_unexecuted_blocks=1 00:16:18.618 00:16:18.618 ' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:18.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.618 --rc genhtml_branch_coverage=1 00:16:18.618 --rc genhtml_function_coverage=1 00:16:18.618 --rc genhtml_legend=1 00:16:18.618 
--rc geninfo_all_blocks=1 00:16:18.618 --rc geninfo_unexecuted_blocks=1 00:16:18.618 00:16:18.618 ' 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.618 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.619 
21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:18.619 21:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:18.619 21:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:18.619 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:23.969 21:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:23.969 21:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:23.969 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:23.969 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.969 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.970 
21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:23.970 Found net devices under 0000:86:00.0: cvl_0_0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.970 
21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:23.970 Found net devices under 0000:86:00.1: cvl_0_1 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:23.970 21:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:23.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:16:23.970 00:16:23.970 --- 10.0.0.2 ping statistics --- 00:16:23.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.970 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:16:23.970 00:16:23.970 --- 10.0.0.1 ping statistics --- 00:16:23.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.970 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1289888 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1289888 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1289888 ']' 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1289911 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d4c0b66da5742862b96ae2ccb054dbc0a38f40f17d8c99d7 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.b4b 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d4c0b66da5742862b96ae2ccb054dbc0a38f40f17d8c99d7 0 00:16:23.970 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d4c0b66da5742862b96ae2ccb054dbc0a38f40f17d8c99d7 0 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d4c0b66da5742862b96ae2ccb054dbc0a38f40f17d8c99d7 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.b4b 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.b4b 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.b4b 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=62026d6de9331adc7a6670a42da7fd443002631458891def6ea14836fb82c344 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1Vp 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 62026d6de9331adc7a6670a42da7fd443002631458891def6ea14836fb82c344 3 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 62026d6de9331adc7a6670a42da7fd443002631458891def6ea14836fb82c344 3 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=62026d6de9331adc7a6670a42da7fd443002631458891def6ea14836fb82c344 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1Vp 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1Vp 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1Vp 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=59042844e7ce380bd908409fe991be1a 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OWm 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 59042844e7ce380bd908409fe991be1a 1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
59042844e7ce380bd908409fe991be1a 1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=59042844e7ce380bd908409fe991be1a 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OWm 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OWm 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.OWm 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=509da695f230bcfb4d96c45141b6fd7a96b71bcf3cfac037 00:16:23.971 21:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zC7 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 509da695f230bcfb4d96c45141b6fd7a96b71bcf3cfac037 2 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 509da695f230bcfb4d96c45141b6fd7a96b71bcf3cfac037 2 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=509da695f230bcfb4d96c45141b6fd7a96b71bcf3cfac037 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:23.971 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zC7 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zC7 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.zC7 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd09221c3ff07dfbc74ecea8bfb977fa45695f1030eb2db8 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.foT 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd09221c3ff07dfbc74ecea8bfb977fa45695f1030eb2db8 2 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bd09221c3ff07dfbc74ecea8bfb977fa45695f1030eb2db8 2 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd09221c3ff07dfbc74ecea8bfb977fa45695f1030eb2db8 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:23.971 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.foT 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.foT 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.foT 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f4186c672cf027af4c057f1b0b27bc8e 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uXQ 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f4186c672cf027af4c057f1b0b27bc8e 1 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f4186c672cf027af4c057f1b0b27bc8e 1 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f4186c672cf027af4c057f1b0b27bc8e 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uXQ 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uXQ 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.uXQ 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b1c6beeefaab529a1732b26568991276e0456d999e3087797c762b99115e87c5 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jdN 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b1c6beeefaab529a1732b26568991276e0456d999e3087797c762b99115e87c5 3 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 b1c6beeefaab529a1732b26568991276e0456d999e3087797c762b99115e87c5 3 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b1c6beeefaab529a1732b26568991276e0456d999e3087797c762b99115e87c5 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jdN 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jdN 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.jdN 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1289888 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1289888 ']' 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
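The `gen_dhchap_key`/`format_dhchap_key` traces above (nvmf/common.sh@751-@760) read random bytes with `xxd`, then pipe the hex string through an inline `python -` snippet (nvmf/common.sh@733) to produce the secret file. The snippet itself is not shown in the log; the sketch below is a reconstruction assuming the standard NVMe DH-HMAC-CHAP secret representation (base64 of the secret bytes followed by their little-endian CRC-32, wrapped in a `DHHC-1:<digest>:` prefix, where digest 0/1/2/3 selects null/SHA-256/SHA-384/SHA-512), so treat it as illustrative rather than the exact SPDK code:

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Encode a secret in the DH-HMAC-CHAP representation:
    DHHC-1:<digest>:<base64(secret || CRC32(secret) little-endian)>:
    The hex string produced by `xxd -p` is itself used as the secret bytes,
    which is why len=48 corresponds to `xxd -l 24` in the trace above."""
    secret = key_hex.encode("ascii")
    crc = zlib.crc32(secret).to_bytes(4, "little")
    b64 = base64.b64encode(secret + crc).decode("ascii")
    return f"DHHC-1:{digest:02x}:{b64}:"

# Using the 48-character null-digest key generated at nvmf/common.sh@755 above:
key = "d4c0b66da5742862b96ae2ccb054dbc0a38f40f17d8c99d7"
secret_line = format_dhchap_key(key, 0)
```

The resulting string is what gets written to the `mktemp` file (e.g. `/tmp/spdk.key-null.b4b`) and locked down with `chmod 0600` before registration.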
00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.230 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1289911 /var/tmp/host.sock 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1289911 ']' 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:24.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.489 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.b4b 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.b4b 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.b4b 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.1Vp ]] 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Vp 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.747 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.005 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.005 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Vp 00:16:25.005 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Vp 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OWm 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OWm 00:16:25.005 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OWm 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.zC7 ]] 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zC7 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zC7 00:16:25.264 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zC7 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.foT 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.foT 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.foT 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.uXQ ]] 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uXQ 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.521 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uXQ 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uXQ 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jdN 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jdN 00:16:25.779 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jdN 00:16:26.037 21:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:26.037 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:26.037 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.037 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.037 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.037 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.295 21:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.295 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.554 00:16:26.554 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.554 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.554 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.812 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.812 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.812 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.812 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.813 { 00:16:26.813 "cntlid": 1, 00:16:26.813 "qid": 0, 00:16:26.813 "state": "enabled", 00:16:26.813 "thread": "nvmf_tgt_poll_group_000", 00:16:26.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.813 "listen_address": { 00:16:26.813 "trtype": "TCP", 00:16:26.813 "adrfam": "IPv4", 00:16:26.813 "traddr": "10.0.0.2", 00:16:26.813 "trsvcid": "4420" 00:16:26.813 }, 00:16:26.813 "peer_address": { 00:16:26.813 "trtype": "TCP", 00:16:26.813 "adrfam": "IPv4", 00:16:26.813 "traddr": "10.0.0.1", 00:16:26.813 "trsvcid": "55526" 00:16:26.813 }, 00:16:26.813 "auth": { 00:16:26.813 "state": "completed", 00:16:26.813 "digest": "sha256", 00:16:26.813 "dhgroup": "null" 00:16:26.813 } 00:16:26.813 } 00:16:26.813 ]' 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.813 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.071 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:27.071 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:27.637 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.895 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.153 00:16:28.153 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.153 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.153 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.153 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.153 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.154 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.154 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.412 { 00:16:28.412 "cntlid": 3, 00:16:28.412 "qid": 0, 00:16:28.412 "state": "enabled", 00:16:28.412 "thread": "nvmf_tgt_poll_group_000", 00:16:28.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.412 "listen_address": { 00:16:28.412 "trtype": "TCP", 00:16:28.412 "adrfam": "IPv4", 00:16:28.412 
"traddr": "10.0.0.2", 00:16:28.412 "trsvcid": "4420" 00:16:28.412 }, 00:16:28.412 "peer_address": { 00:16:28.412 "trtype": "TCP", 00:16:28.412 "adrfam": "IPv4", 00:16:28.412 "traddr": "10.0.0.1", 00:16:28.412 "trsvcid": "55554" 00:16:28.412 }, 00:16:28.412 "auth": { 00:16:28.412 "state": "completed", 00:16:28.412 "digest": "sha256", 00:16:28.412 "dhgroup": "null" 00:16:28.412 } 00:16:28.412 } 00:16:28.412 ]' 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.412 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.670 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:28.670 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.236 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.494 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.494 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.753 
21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.753 { 00:16:29.753 "cntlid": 5, 00:16:29.753 "qid": 0, 00:16:29.753 "state": "enabled", 00:16:29.753 "thread": "nvmf_tgt_poll_group_000", 00:16:29.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:29.753 "listen_address": { 00:16:29.753 "trtype": "TCP", 00:16:29.753 "adrfam": "IPv4", 00:16:29.753 "traddr": "10.0.0.2", 00:16:29.753 "trsvcid": "4420" 00:16:29.753 }, 00:16:29.753 "peer_address": { 00:16:29.753 "trtype": "TCP", 00:16:29.753 "adrfam": "IPv4", 00:16:29.753 "traddr": "10.0.0.1", 00:16:29.753 "trsvcid": "55572" 00:16:29.753 }, 00:16:29.753 "auth": { 00:16:29.753 "state": "completed", 00:16:29.753 "digest": "sha256", 00:16:29.753 "dhgroup": "null" 00:16:29.753 } 00:16:29.753 } 00:16:29.753 ]' 00:16:29.753 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.012 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.269 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:30.269 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.836 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.094 00:16:31.094 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.094 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.094 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.371 
21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.371 { 00:16:31.371 "cntlid": 7, 00:16:31.371 "qid": 0, 00:16:31.371 "state": "enabled", 00:16:31.371 "thread": "nvmf_tgt_poll_group_000", 00:16:31.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.371 "listen_address": { 00:16:31.371 "trtype": "TCP", 00:16:31.371 "adrfam": "IPv4", 00:16:31.371 "traddr": "10.0.0.2", 00:16:31.371 "trsvcid": "4420" 00:16:31.371 }, 00:16:31.371 "peer_address": { 00:16:31.371 "trtype": "TCP", 00:16:31.371 "adrfam": "IPv4", 00:16:31.371 "traddr": "10.0.0.1", 00:16:31.371 "trsvcid": "55618" 00:16:31.371 }, 00:16:31.371 "auth": { 00:16:31.371 "state": "completed", 00:16:31.371 "digest": "sha256", 00:16:31.371 "dhgroup": "null" 00:16:31.371 } 00:16:31.371 } 00:16:31.371 ]' 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.371 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.629 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.629 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.629 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.629 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:31.629 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.194 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.452 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.709 00:16:32.709 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.709 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.709 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.968 { 00:16:32.968 "cntlid": 9, 00:16:32.968 "qid": 0, 00:16:32.968 "state": "enabled", 00:16:32.968 "thread": "nvmf_tgt_poll_group_000", 00:16:32.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:32.968 "listen_address": { 00:16:32.968 "trtype": "TCP", 00:16:32.968 "adrfam": "IPv4", 00:16:32.968 "traddr": "10.0.0.2", 00:16:32.968 "trsvcid": "4420" 00:16:32.968 }, 00:16:32.968 "peer_address": { 00:16:32.968 "trtype": "TCP", 00:16:32.968 "adrfam": "IPv4", 00:16:32.968 "traddr": "10.0.0.1", 00:16:32.968 "trsvcid": "51544" 00:16:32.968 
}, 00:16:32.968 "auth": { 00:16:32.968 "state": "completed", 00:16:32.968 "digest": "sha256", 00:16:32.968 "dhgroup": "ffdhe2048" 00:16:32.968 } 00:16:32.968 } 00:16:32.968 ]' 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.968 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.968 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.968 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.968 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.968 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.968 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.226 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:33.226 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret 
DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.791 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.050 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.308 00:16:34.308 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.308 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.308 21:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.566 { 00:16:34.566 "cntlid": 11, 00:16:34.566 "qid": 0, 00:16:34.566 "state": "enabled", 00:16:34.566 "thread": "nvmf_tgt_poll_group_000", 00:16:34.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.566 "listen_address": { 00:16:34.566 "trtype": "TCP", 00:16:34.566 "adrfam": "IPv4", 00:16:34.566 "traddr": "10.0.0.2", 00:16:34.566 "trsvcid": "4420" 00:16:34.566 }, 00:16:34.566 "peer_address": { 00:16:34.566 "trtype": "TCP", 00:16:34.566 "adrfam": "IPv4", 00:16:34.566 "traddr": "10.0.0.1", 00:16:34.566 "trsvcid": "51568" 00:16:34.566 }, 00:16:34.566 "auth": { 00:16:34.566 "state": "completed", 00:16:34.566 "digest": "sha256", 00:16:34.566 "dhgroup": "ffdhe2048" 00:16:34.566 } 00:16:34.566 } 00:16:34.566 ]' 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.566 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.824 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:34.824 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:35.390 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.390 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.391 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.391 21:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.391 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.391 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.391 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.391 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.649 21:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.649 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.908 00:16:35.908 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.908 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.908 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.166 { 00:16:36.166 "cntlid": 13, 00:16:36.166 "qid": 0, 00:16:36.166 "state": "enabled", 00:16:36.166 "thread": "nvmf_tgt_poll_group_000", 00:16:36.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.166 "listen_address": { 00:16:36.166 "trtype": "TCP", 00:16:36.166 "adrfam": "IPv4", 00:16:36.166 "traddr": "10.0.0.2", 00:16:36.166 "trsvcid": "4420" 00:16:36.166 }, 00:16:36.166 "peer_address": { 00:16:36.166 "trtype": "TCP", 00:16:36.166 "adrfam": "IPv4", 00:16:36.166 "traddr": "10.0.0.1", 00:16:36.166 "trsvcid": "51600" 00:16:36.166 }, 00:16:36.166 "auth": { 00:16:36.166 "state": "completed", 00:16:36.166 "digest": "sha256", 00:16:36.166 "dhgroup": "ffdhe2048" 00:16:36.166 } 00:16:36.166 } 00:16:36.166 ]' 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.166 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.167 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.167 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.167 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.167 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.167 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.167 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
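Each connect_authenticate round in this log ends with three jq probes against the `nvmf_subsystem_get_qpairs` output, checking `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state`. As a minimal illustration (not part of the SPDK test suite), the same validation in Python, with the field names and sample values taken from the qpair record logged above:

```python
import json

def check_qpair_auth(qpairs_json: str, digest: str, dhgroup: str) -> None:
    """Replicate the test's jq checks on nvmf_subsystem_get_qpairs output:
    .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state."""
    auth = json.loads(qpairs_json)[0]["auth"]
    assert auth["digest"] == digest, f"unexpected digest: {auth['digest']}"
    assert auth["dhgroup"] == dhgroup, f"unexpected dhgroup: {auth['dhgroup']}"
    assert auth["state"] == "completed", f"auth not completed: {auth['state']}"

# Sample trimmed from the qpair record printed in the log above
sample = ('[{"cntlid": 13, "qid": 0, "state": "enabled",'
          ' "auth": {"state": "completed", "digest": "sha256",'
          ' "dhgroup": "ffdhe2048"}}]')
check_qpair_auth(sample, "sha256", "ffdhe2048")  # matches, no exception
```

Unlike the shell version, which compares jq output against glob patterns (`[[ sha256 == \s\h\a\2\5\6 ]]`), this sketch fails loudly with the offending value when a field mismatches.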
00:16:36.425 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:36.425 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.993 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.251 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.510 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.510 { 00:16:37.510 "cntlid": 15, 00:16:37.510 "qid": 0, 00:16:37.510 "state": "enabled", 00:16:37.510 "thread": "nvmf_tgt_poll_group_000", 00:16:37.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.510 "listen_address": { 00:16:37.510 "trtype": "TCP", 00:16:37.510 "adrfam": "IPv4", 00:16:37.510 "traddr": "10.0.0.2", 00:16:37.510 "trsvcid": "4420" 00:16:37.510 }, 00:16:37.510 "peer_address": { 00:16:37.510 "trtype": "TCP", 00:16:37.510 "adrfam": "IPv4", 00:16:37.510 "traddr": "10.0.0.1", 00:16:37.510 "trsvcid": "51624" 00:16:37.510 }, 00:16:37.510 "auth": { 00:16:37.510 
"state": "completed", 00:16:37.510 "digest": "sha256", 00:16:37.510 "dhgroup": "ffdhe2048" 00:16:37.510 } 00:16:37.510 } 00:16:37.510 ]' 00:16:37.510 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.767 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.025 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:38.025 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.590 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.590 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.848 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.848 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.848 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.848 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.849 00:16:39.107 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.107 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.107 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.107 
21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.107 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.107 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.107 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.107 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.107 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.107 { 00:16:39.107 "cntlid": 17, 00:16:39.107 "qid": 0, 00:16:39.107 "state": "enabled", 00:16:39.107 "thread": "nvmf_tgt_poll_group_000", 00:16:39.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.107 "listen_address": { 00:16:39.107 "trtype": "TCP", 00:16:39.107 "adrfam": "IPv4", 00:16:39.107 "traddr": "10.0.0.2", 00:16:39.107 "trsvcid": "4420" 00:16:39.107 }, 00:16:39.107 "peer_address": { 00:16:39.107 "trtype": "TCP", 00:16:39.107 "adrfam": "IPv4", 00:16:39.107 "traddr": "10.0.0.1", 00:16:39.107 "trsvcid": "51642" 00:16:39.107 }, 00:16:39.107 "auth": { 00:16:39.107 "state": "completed", 00:16:39.107 "digest": "sha256", 00:16:39.107 "dhgroup": "ffdhe3072" 00:16:39.107 } 00:16:39.107 } 00:16:39.107 ]' 00:16:39.107 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.365 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.365 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.365 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.365 21:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.365 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.365 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.365 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.623 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:39.623 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.189 21:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.189 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.448 21:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.448 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.448 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.448 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.448 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.707 { 00:16:40.707 "cntlid": 19, 00:16:40.707 "qid": 0, 00:16:40.707 "state": "enabled", 00:16:40.707 "thread": "nvmf_tgt_poll_group_000", 00:16:40.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.707 "listen_address": { 00:16:40.707 "trtype": "TCP", 00:16:40.707 "adrfam": "IPv4", 00:16:40.707 "traddr": "10.0.0.2", 00:16:40.707 "trsvcid": "4420" 00:16:40.707 }, 00:16:40.707 "peer_address": { 00:16:40.707 "trtype": "TCP", 00:16:40.707 "adrfam": "IPv4", 00:16:40.707 "traddr": "10.0.0.1", 00:16:40.707 "trsvcid": "51662" 00:16:40.707 }, 00:16:40.707 "auth": { 00:16:40.707 "state": "completed", 00:16:40.707 "digest": "sha256", 00:16:40.707 "dhgroup": "ffdhe3072" 00:16:40.707 } 00:16:40.707 } 00:16:40.707 ]' 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.707 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.965 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.965 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.965 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.965 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.965 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.965 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:41.223 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:41.223 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.789 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.790 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.047 00:16:42.047 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.047 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.048 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.306 { 00:16:42.306 "cntlid": 21, 00:16:42.306 "qid": 0, 00:16:42.306 "state": "enabled", 00:16:42.306 "thread": "nvmf_tgt_poll_group_000", 00:16:42.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.306 "listen_address": { 00:16:42.306 "trtype": "TCP", 00:16:42.306 "adrfam": "IPv4", 00:16:42.306 "traddr": "10.0.0.2", 00:16:42.306 "trsvcid": "4420" 00:16:42.306 }, 00:16:42.306 "peer_address": { 00:16:42.306 "trtype": "TCP", 00:16:42.306 "adrfam": "IPv4", 
00:16:42.306 "traddr": "10.0.0.1", 00:16:42.306 "trsvcid": "39096" 00:16:42.306 }, 00:16:42.306 "auth": { 00:16:42.306 "state": "completed", 00:16:42.306 "digest": "sha256", 00:16:42.306 "dhgroup": "ffdhe3072" 00:16:42.306 } 00:16:42.306 } 00:16:42.306 ]' 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.306 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.564 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.564 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.564 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.564 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.564 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.822 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:42.822 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.388 21:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.388 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.389 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.389 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.389 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.659 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.659 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.659 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.659 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.659 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.918 { 00:16:43.918 "cntlid": 23, 00:16:43.918 "qid": 0, 00:16:43.918 "state": "enabled", 00:16:43.918 "thread": "nvmf_tgt_poll_group_000", 00:16:43.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.918 "listen_address": { 00:16:43.918 "trtype": "TCP", 00:16:43.918 "adrfam": "IPv4", 00:16:43.918 "traddr": "10.0.0.2", 00:16:43.918 "trsvcid": "4420" 00:16:43.918 }, 00:16:43.918 "peer_address": { 00:16:43.918 "trtype": "TCP", 00:16:43.918 "adrfam": "IPv4", 00:16:43.918 "traddr": "10.0.0.1", 00:16:43.918 "trsvcid": "39138" 00:16:43.918 }, 00:16:43.918 "auth": { 00:16:43.918 "state": "completed", 00:16:43.918 "digest": "sha256", 00:16:43.918 "dhgroup": "ffdhe3072" 00:16:43.918 } 00:16:43.918 } 00:16:43.918 ]' 00:16:43.918 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.176 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.176 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.176 21:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.176 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.176 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.176 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.176 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.435 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:44.435 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.001 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.001 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:45.001 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.001 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.001 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.001 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.001 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.002 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.260 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.518 21:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.518 { 00:16:45.518 "cntlid": 25, 00:16:45.518 "qid": 0, 00:16:45.518 "state": "enabled", 00:16:45.518 "thread": "nvmf_tgt_poll_group_000", 00:16:45.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.518 "listen_address": { 00:16:45.518 "trtype": "TCP", 00:16:45.518 "adrfam": "IPv4", 00:16:45.518 "traddr": "10.0.0.2", 00:16:45.518 "trsvcid": "4420" 00:16:45.518 }, 00:16:45.518 "peer_address": { 00:16:45.518 "trtype": "TCP", 00:16:45.518 "adrfam": "IPv4", 00:16:45.518 "traddr": "10.0.0.1", 00:16:45.518 "trsvcid": "39166" 00:16:45.518 }, 00:16:45.518 "auth": { 00:16:45.518 "state": "completed", 00:16:45.518 "digest": "sha256", 00:16:45.518 "dhgroup": "ffdhe4096" 00:16:45.518 } 00:16:45.518 } 00:16:45.518 ]' 00:16:45.518 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.777 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.035 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:46.035 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.604 21:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.604 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.864 00:16:46.864 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.864 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.864 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.122 { 00:16:47.122 "cntlid": 27, 00:16:47.122 "qid": 0, 00:16:47.122 "state": "enabled", 00:16:47.122 "thread": "nvmf_tgt_poll_group_000", 00:16:47.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.122 "listen_address": { 00:16:47.122 "trtype": "TCP", 00:16:47.122 "adrfam": "IPv4", 00:16:47.122 "traddr": "10.0.0.2", 00:16:47.122 
"trsvcid": "4420" 00:16:47.122 }, 00:16:47.122 "peer_address": { 00:16:47.122 "trtype": "TCP", 00:16:47.122 "adrfam": "IPv4", 00:16:47.122 "traddr": "10.0.0.1", 00:16:47.122 "trsvcid": "39192" 00:16:47.122 }, 00:16:47.122 "auth": { 00:16:47.122 "state": "completed", 00:16:47.122 "digest": "sha256", 00:16:47.122 "dhgroup": "ffdhe4096" 00:16:47.122 } 00:16:47.122 } 00:16:47.122 ]' 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.122 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.380 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.380 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.380 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.380 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.381 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.638 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:47.638 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.218 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.477 00:16:48.477 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.477 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:48.477 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.735 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.736 { 00:16:48.736 "cntlid": 29, 00:16:48.736 "qid": 0, 00:16:48.736 "state": "enabled", 00:16:48.736 "thread": "nvmf_tgt_poll_group_000", 00:16:48.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.736 "listen_address": { 00:16:48.736 "trtype": "TCP", 00:16:48.736 "adrfam": "IPv4", 00:16:48.736 "traddr": "10.0.0.2", 00:16:48.736 "trsvcid": "4420" 00:16:48.736 }, 00:16:48.736 "peer_address": { 00:16:48.736 "trtype": "TCP", 00:16:48.736 "adrfam": "IPv4", 00:16:48.736 "traddr": "10.0.0.1", 00:16:48.736 "trsvcid": "39218" 00:16:48.736 }, 00:16:48.736 "auth": { 00:16:48.736 "state": "completed", 00:16:48.736 "digest": "sha256", 00:16:48.736 "dhgroup": "ffdhe4096" 00:16:48.736 } 00:16:48.736 } 00:16:48.736 ]' 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.736 21:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.736 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.994 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.994 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.994 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.994 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:48.994 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.560 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.818 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.076 00:16:50.076 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.076 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.076 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.334 { 00:16:50.334 "cntlid": 31, 00:16:50.334 "qid": 0, 00:16:50.334 "state": "enabled", 00:16:50.334 "thread": "nvmf_tgt_poll_group_000", 00:16:50.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.334 "listen_address": { 00:16:50.334 "trtype": "TCP", 00:16:50.334 "adrfam": "IPv4", 00:16:50.334 "traddr": "10.0.0.2", 00:16:50.334 "trsvcid": "4420" 00:16:50.334 }, 00:16:50.334 "peer_address": { 00:16:50.334 "trtype": "TCP", 00:16:50.334 "adrfam": "IPv4", 00:16:50.334 "traddr": "10.0.0.1", 00:16:50.334 "trsvcid": "39248" 00:16:50.334 }, 00:16:50.334 "auth": { 00:16:50.334 "state": "completed", 00:16:50.334 "digest": "sha256", 00:16:50.334 "dhgroup": "ffdhe4096" 00:16:50.334 } 00:16:50.334 } 00:16:50.334 ]' 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.334 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:50.592 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.158 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.158 21:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.416 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.675 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.933 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.933 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.933 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.933 { 00:16:51.933 "cntlid": 33, 00:16:51.933 "qid": 0, 00:16:51.933 "state": "enabled", 00:16:51.933 "thread": "nvmf_tgt_poll_group_000", 00:16:51.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.933 "listen_address": { 00:16:51.933 "trtype": "TCP", 00:16:51.933 "adrfam": "IPv4", 00:16:51.933 "traddr": "10.0.0.2", 00:16:51.933 
"trsvcid": "4420" 00:16:51.933 }, 00:16:51.933 "peer_address": { 00:16:51.933 "trtype": "TCP", 00:16:51.933 "adrfam": "IPv4", 00:16:51.933 "traddr": "10.0.0.1", 00:16:51.933 "trsvcid": "39272" 00:16:51.933 }, 00:16:51.933 "auth": { 00:16:51.933 "state": "completed", 00:16:51.933 "digest": "sha256", 00:16:51.933 "dhgroup": "ffdhe6144" 00:16:51.933 } 00:16:51.933 } 00:16:51.933 ]' 00:16:51.933 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.192 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.450 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:52.450 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.016 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.016 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:53.016 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.016 21:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.272 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.529 00:16:53.529 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.529 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.529 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.786 { 00:16:53.786 "cntlid": 35, 00:16:53.786 "qid": 0, 00:16:53.786 "state": "enabled", 00:16:53.786 "thread": "nvmf_tgt_poll_group_000", 00:16:53.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.786 "listen_address": { 00:16:53.786 "trtype": "TCP", 00:16:53.786 "adrfam": "IPv4", 00:16:53.786 "traddr": "10.0.0.2", 00:16:53.786 "trsvcid": "4420" 00:16:53.786 }, 00:16:53.786 "peer_address": { 00:16:53.786 "trtype": "TCP", 00:16:53.786 "adrfam": "IPv4", 00:16:53.786 "traddr": "10.0.0.1", 00:16:53.786 "trsvcid": "56096" 00:16:53.786 }, 00:16:53.786 "auth": { 00:16:53.786 "state": "completed", 00:16:53.786 "digest": "sha256", 00:16:53.786 "dhgroup": "ffdhe6144" 00:16:53.786 } 00:16:53.786 } 00:16:53.786 ]' 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.786 21:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.786 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.044 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:54.044 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:16:54.663 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.663 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.663 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.663 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.663 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.663 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.664 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.664 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.921 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.179 00:16:55.179 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.179 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.179 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.437 21:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.437 { 00:16:55.437 "cntlid": 37, 00:16:55.437 "qid": 0, 00:16:55.437 "state": "enabled", 00:16:55.437 "thread": "nvmf_tgt_poll_group_000", 00:16:55.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.437 "listen_address": { 00:16:55.437 "trtype": "TCP", 00:16:55.437 "adrfam": "IPv4", 00:16:55.437 "traddr": "10.0.0.2", 00:16:55.437 "trsvcid": "4420" 00:16:55.437 }, 00:16:55.437 "peer_address": { 00:16:55.437 "trtype": "TCP", 00:16:55.437 "adrfam": "IPv4", 00:16:55.437 "traddr": "10.0.0.1", 00:16:55.437 "trsvcid": "56126" 00:16:55.437 }, 00:16:55.437 "auth": { 00:16:55.437 "state": "completed", 00:16:55.437 "digest": "sha256", 00:16:55.437 "dhgroup": "ffdhe6144" 00:16:55.437 } 00:16:55.437 } 00:16:55.437 ]' 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.437 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.695 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.695 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.695 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.695 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:55.695 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.261 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.519 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.085 00:16:57.085 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.085 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.085 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.085 { 00:16:57.085 "cntlid": 39, 00:16:57.085 "qid": 0, 00:16:57.085 "state": "enabled", 00:16:57.085 "thread": "nvmf_tgt_poll_group_000", 00:16:57.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.085 "listen_address": { 00:16:57.085 "trtype": "TCP", 00:16:57.085 "adrfam": 
"IPv4", 00:16:57.085 "traddr": "10.0.0.2", 00:16:57.085 "trsvcid": "4420" 00:16:57.085 }, 00:16:57.085 "peer_address": { 00:16:57.085 "trtype": "TCP", 00:16:57.085 "adrfam": "IPv4", 00:16:57.085 "traddr": "10.0.0.1", 00:16:57.085 "trsvcid": "56156" 00:16:57.085 }, 00:16:57.085 "auth": { 00:16:57.085 "state": "completed", 00:16:57.085 "digest": "sha256", 00:16:57.085 "dhgroup": "ffdhe6144" 00:16:57.085 } 00:16:57.085 } 00:16:57.085 ]' 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.085 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:57.343 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:16:57.909 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.909 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.909 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.909 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.168 
21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.168 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.734 00:16:58.734 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.734 21:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.734 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.992 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.992 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.992 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.992 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.992 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.992 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.992 { 00:16:58.992 "cntlid": 41, 00:16:58.992 "qid": 0, 00:16:58.992 "state": "enabled", 00:16:58.992 "thread": "nvmf_tgt_poll_group_000", 00:16:58.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:58.992 "listen_address": { 00:16:58.992 "trtype": "TCP", 00:16:58.992 "adrfam": "IPv4", 00:16:58.992 "traddr": "10.0.0.2", 00:16:58.992 "trsvcid": "4420" 00:16:58.992 }, 00:16:58.992 "peer_address": { 00:16:58.992 "trtype": "TCP", 00:16:58.992 "adrfam": "IPv4", 00:16:58.992 "traddr": "10.0.0.1", 00:16:58.992 "trsvcid": "56196" 00:16:58.992 }, 00:16:58.992 "auth": { 00:16:58.992 "state": "completed", 00:16:58.992 "digest": "sha256", 00:16:58.992 "dhgroup": "ffdhe8192" 00:16:58.992 } 00:16:58.992 } 00:16:58.992 ]' 00:16:58.993 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.993 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:58.993 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.993 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.993 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.993 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.993 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.993 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.251 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:59.251 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.817 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.075 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.642 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.642 21:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.642 { 00:17:00.642 "cntlid": 43, 00:17:00.642 "qid": 0, 00:17:00.642 "state": "enabled", 00:17:00.642 "thread": "nvmf_tgt_poll_group_000", 00:17:00.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:00.642 "listen_address": { 00:17:00.642 "trtype": "TCP", 00:17:00.642 "adrfam": "IPv4", 00:17:00.642 "traddr": "10.0.0.2", 00:17:00.642 "trsvcid": "4420" 00:17:00.642 }, 00:17:00.642 "peer_address": { 00:17:00.642 "trtype": "TCP", 00:17:00.642 "adrfam": "IPv4", 00:17:00.642 "traddr": "10.0.0.1", 00:17:00.642 "trsvcid": "56222" 00:17:00.642 }, 00:17:00.642 "auth": { 00:17:00.642 "state": "completed", 00:17:00.642 "digest": "sha256", 00:17:00.642 "dhgroup": "ffdhe8192" 00:17:00.642 } 00:17:00.642 } 00:17:00.642 ]' 00:17:00.642 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.901 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.159 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:01.159 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.725 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.726 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.293 00:17:02.293 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.293 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.293 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.551 { 00:17:02.551 "cntlid": 45, 00:17:02.551 "qid": 0, 00:17:02.551 "state": "enabled", 00:17:02.551 "thread": "nvmf_tgt_poll_group_000", 00:17:02.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.551 
"listen_address": { 00:17:02.551 "trtype": "TCP", 00:17:02.551 "adrfam": "IPv4", 00:17:02.551 "traddr": "10.0.0.2", 00:17:02.551 "trsvcid": "4420" 00:17:02.551 }, 00:17:02.551 "peer_address": { 00:17:02.551 "trtype": "TCP", 00:17:02.551 "adrfam": "IPv4", 00:17:02.551 "traddr": "10.0.0.1", 00:17:02.551 "trsvcid": "56236" 00:17:02.551 }, 00:17:02.551 "auth": { 00:17:02.551 "state": "completed", 00:17:02.551 "digest": "sha256", 00:17:02.551 "dhgroup": "ffdhe8192" 00:17:02.551 } 00:17:02.551 } 00:17:02.551 ]' 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.551 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.809 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:02.809 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.374 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.632 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.199 00:17:04.199 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.199 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:04.199 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.457 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.457 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.457 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.457 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.457 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.457 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.457 { 00:17:04.457 "cntlid": 47, 00:17:04.457 "qid": 0, 00:17:04.457 "state": "enabled", 00:17:04.457 "thread": "nvmf_tgt_poll_group_000", 00:17:04.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.457 "listen_address": { 00:17:04.457 "trtype": "TCP", 00:17:04.457 "adrfam": "IPv4", 00:17:04.457 "traddr": "10.0.0.2", 00:17:04.457 "trsvcid": "4420" 00:17:04.457 }, 00:17:04.457 "peer_address": { 00:17:04.457 "trtype": "TCP", 00:17:04.457 "adrfam": "IPv4", 00:17:04.457 "traddr": "10.0.0.1", 00:17:04.457 "trsvcid": "39114" 00:17:04.457 }, 00:17:04.458 "auth": { 00:17:04.458 "state": "completed", 00:17:04.458 "digest": "sha256", 00:17:04.458 "dhgroup": "ffdhe8192" 00:17:04.458 } 00:17:04.458 } 00:17:04.458 ]' 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.458 21:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.458 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.716 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:04.717 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.283 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.541 
21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.541 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.542 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.542 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.542 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.542 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.810 00:17:05.810 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.810 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.810 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.810 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.811 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.811 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.811 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.811 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.811 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.811 { 00:17:05.811 "cntlid": 49, 00:17:05.811 "qid": 0, 00:17:05.811 "state": "enabled", 00:17:05.811 "thread": "nvmf_tgt_poll_group_000", 00:17:05.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.811 "listen_address": { 00:17:05.811 "trtype": "TCP", 00:17:05.811 "adrfam": "IPv4", 00:17:05.811 "traddr": "10.0.0.2", 00:17:05.811 "trsvcid": "4420" 00:17:05.811 }, 00:17:05.811 "peer_address": { 00:17:05.811 "trtype": "TCP", 00:17:05.811 "adrfam": "IPv4", 00:17:05.811 "traddr": "10.0.0.1", 00:17:05.811 "trsvcid": "39136" 00:17:05.811 }, 00:17:05.811 "auth": { 00:17:05.811 "state": "completed", 00:17:05.811 "digest": "sha384", 00:17:05.811 "dhgroup": "null" 00:17:05.811 } 00:17:05.811 } 00:17:05.811 ]' 00:17:05.811 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.074 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.074 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.074 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.074 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.074 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.074 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:06.074 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.333 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:06.333 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.900 21:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.900 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.161 00:17:07.161 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.161 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.161 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.419 { 00:17:07.419 "cntlid": 51, 00:17:07.419 "qid": 0, 00:17:07.419 "state": "enabled", 00:17:07.419 "thread": "nvmf_tgt_poll_group_000", 00:17:07.419 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.419 "listen_address": { 00:17:07.419 "trtype": "TCP", 00:17:07.419 "adrfam": "IPv4", 00:17:07.419 "traddr": "10.0.0.2", 00:17:07.419 "trsvcid": "4420" 00:17:07.419 }, 00:17:07.419 "peer_address": { 00:17:07.419 "trtype": "TCP", 00:17:07.419 "adrfam": "IPv4", 00:17:07.419 "traddr": "10.0.0.1", 00:17:07.419 "trsvcid": "39156" 00:17:07.419 }, 00:17:07.419 "auth": { 00:17:07.419 "state": "completed", 00:17:07.419 "digest": "sha384", 00:17:07.419 "dhgroup": "null" 00:17:07.419 } 00:17:07.419 } 00:17:07.419 ]' 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.419 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.676 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.676 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.676 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.676 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.676 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.676 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:07.677 21:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:08.243 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.243 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.243 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.243 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.502 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.761 00:17:08.761 21:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.761 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.761 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.020 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.020 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.020 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.020 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.020 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.020 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.020 { 00:17:09.020 "cntlid": 53, 00:17:09.020 "qid": 0, 00:17:09.020 "state": "enabled", 00:17:09.020 "thread": "nvmf_tgt_poll_group_000", 00:17:09.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.020 "listen_address": { 00:17:09.020 "trtype": "TCP", 00:17:09.020 "adrfam": "IPv4", 00:17:09.020 "traddr": "10.0.0.2", 00:17:09.020 "trsvcid": "4420" 00:17:09.020 }, 00:17:09.020 "peer_address": { 00:17:09.020 "trtype": "TCP", 00:17:09.020 "adrfam": "IPv4", 00:17:09.020 "traddr": "10.0.0.1", 00:17:09.020 "trsvcid": "39198" 00:17:09.020 }, 00:17:09.020 "auth": { 00:17:09.020 "state": "completed", 00:17:09.020 "digest": "sha384", 00:17:09.020 "dhgroup": "null" 00:17:09.020 } 00:17:09.020 } 00:17:09.020 ]' 00:17:09.021 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:09.021 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.021 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.021 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.021 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.279 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.279 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.279 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.279 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:09.279 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.846 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:10.106 
21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.106 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.364 00:17:10.364 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.364 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.364 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.621 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.621 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.621 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.621 21:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.621 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.621 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.621 { 00:17:10.621 "cntlid": 55, 00:17:10.621 "qid": 0, 00:17:10.621 "state": "enabled", 00:17:10.621 "thread": "nvmf_tgt_poll_group_000", 00:17:10.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.621 "listen_address": { 00:17:10.621 "trtype": "TCP", 00:17:10.622 "adrfam": "IPv4", 00:17:10.622 "traddr": "10.0.0.2", 00:17:10.622 "trsvcid": "4420" 00:17:10.622 }, 00:17:10.622 "peer_address": { 00:17:10.622 "trtype": "TCP", 00:17:10.622 "adrfam": "IPv4", 00:17:10.622 "traddr": "10.0.0.1", 00:17:10.622 "trsvcid": "39236" 00:17:10.622 }, 00:17:10.622 "auth": { 00:17:10.622 "state": "completed", 00:17:10.622 "digest": "sha384", 00:17:10.622 "dhgroup": "null" 00:17:10.622 } 00:17:10.622 } 00:17:10.622 ]' 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.622 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.879 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:10.879 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.443 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.443 21:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.701 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.959 00:17:11.959 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.959 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.959 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.217 { 00:17:12.217 "cntlid": 57, 00:17:12.217 "qid": 0, 00:17:12.217 "state": "enabled", 00:17:12.217 "thread": "nvmf_tgt_poll_group_000", 00:17:12.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.217 "listen_address": { 00:17:12.217 "trtype": "TCP", 00:17:12.217 "adrfam": "IPv4", 00:17:12.217 "traddr": "10.0.0.2", 00:17:12.217 
"trsvcid": "4420" 00:17:12.217 }, 00:17:12.217 "peer_address": { 00:17:12.217 "trtype": "TCP", 00:17:12.217 "adrfam": "IPv4", 00:17:12.217 "traddr": "10.0.0.1", 00:17:12.217 "trsvcid": "39260" 00:17:12.217 }, 00:17:12.217 "auth": { 00:17:12.217 "state": "completed", 00:17:12.217 "digest": "sha384", 00:17:12.217 "dhgroup": "ffdhe2048" 00:17:12.217 } 00:17:12.217 } 00:17:12.217 ]' 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.217 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.476 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:12.476 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.041 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.299 21:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.299 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.558 00:17:13.558 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.558 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.558 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.816 { 00:17:13.816 "cntlid": 59, 00:17:13.816 "qid": 0, 00:17:13.816 "state": "enabled", 00:17:13.816 "thread": "nvmf_tgt_poll_group_000", 00:17:13.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.816 "listen_address": { 00:17:13.816 "trtype": "TCP", 00:17:13.816 "adrfam": "IPv4", 00:17:13.816 "traddr": "10.0.0.2", 00:17:13.816 "trsvcid": "4420" 00:17:13.816 }, 00:17:13.816 "peer_address": { 00:17:13.816 "trtype": "TCP", 00:17:13.816 "adrfam": "IPv4", 00:17:13.816 "traddr": "10.0.0.1", 00:17:13.816 "trsvcid": "49866" 00:17:13.816 }, 00:17:13.816 "auth": { 00:17:13.816 "state": "completed", 00:17:13.816 "digest": "sha384", 00:17:13.816 "dhgroup": "ffdhe2048" 00:17:13.816 } 00:17:13.816 } 00:17:13.816 ]' 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.816 21:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.816 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.077 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:14.077 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.648 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.905 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.164 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.164 21:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.164 { 00:17:15.164 "cntlid": 61, 00:17:15.164 "qid": 0, 00:17:15.164 "state": "enabled", 00:17:15.164 "thread": "nvmf_tgt_poll_group_000", 00:17:15.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.164 "listen_address": { 00:17:15.164 "trtype": "TCP", 00:17:15.164 "adrfam": "IPv4", 00:17:15.164 "traddr": "10.0.0.2", 00:17:15.164 "trsvcid": "4420" 00:17:15.164 }, 00:17:15.164 "peer_address": { 00:17:15.164 "trtype": "TCP", 00:17:15.164 "adrfam": "IPv4", 00:17:15.164 "traddr": "10.0.0.1", 00:17:15.164 "trsvcid": "49888" 00:17:15.164 }, 00:17:15.164 "auth": { 00:17:15.164 "state": "completed", 00:17:15.164 "digest": "sha384", 00:17:15.164 "dhgroup": "ffdhe2048" 00:17:15.164 } 00:17:15.164 } 00:17:15.164 ]' 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.164 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.423 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.423 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.423 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.423 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.423 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.682 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:15.682 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.249 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.507 00:17:16.507 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.507 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.507 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.765 { 00:17:16.765 "cntlid": 63, 00:17:16.765 "qid": 0, 00:17:16.765 "state": "enabled", 00:17:16.765 "thread": "nvmf_tgt_poll_group_000", 00:17:16.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.765 "listen_address": { 00:17:16.765 "trtype": "TCP", 00:17:16.765 "adrfam": 
"IPv4", 00:17:16.765 "traddr": "10.0.0.2", 00:17:16.765 "trsvcid": "4420" 00:17:16.765 }, 00:17:16.765 "peer_address": { 00:17:16.765 "trtype": "TCP", 00:17:16.765 "adrfam": "IPv4", 00:17:16.765 "traddr": "10.0.0.1", 00:17:16.765 "trsvcid": "49908" 00:17:16.765 }, 00:17:16.765 "auth": { 00:17:16.765 "state": "completed", 00:17:16.765 "digest": "sha384", 00:17:16.765 "dhgroup": "ffdhe2048" 00:17:16.765 } 00:17:16.765 } 00:17:16.765 ]' 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.765 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.023 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.023 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.023 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.023 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.023 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.023 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:17.023 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.632 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.923 
21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.923 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.278 00:17:18.278 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.278 21:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.278 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.536 { 00:17:18.536 "cntlid": 65, 00:17:18.536 "qid": 0, 00:17:18.536 "state": "enabled", 00:17:18.536 "thread": "nvmf_tgt_poll_group_000", 00:17:18.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.536 "listen_address": { 00:17:18.536 "trtype": "TCP", 00:17:18.536 "adrfam": "IPv4", 00:17:18.536 "traddr": "10.0.0.2", 00:17:18.536 "trsvcid": "4420" 00:17:18.536 }, 00:17:18.536 "peer_address": { 00:17:18.536 "trtype": "TCP", 00:17:18.536 "adrfam": "IPv4", 00:17:18.536 "traddr": "10.0.0.1", 00:17:18.536 "trsvcid": "49944" 00:17:18.536 }, 00:17:18.536 "auth": { 00:17:18.536 "state": "completed", 00:17:18.536 "digest": "sha384", 00:17:18.536 "dhgroup": "ffdhe3072" 00:17:18.536 } 00:17:18.536 } 00:17:18.536 ]' 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.536 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.793 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:18.793 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.358 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.615 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:19.615 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.615 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.615 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.615 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.615 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.616 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.873 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.873 21:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.873 { 00:17:19.873 "cntlid": 67, 00:17:19.873 "qid": 0, 00:17:19.873 "state": "enabled", 00:17:19.873 "thread": "nvmf_tgt_poll_group_000", 00:17:19.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.873 "listen_address": { 00:17:19.873 "trtype": "TCP", 00:17:19.873 "adrfam": "IPv4", 00:17:19.873 "traddr": "10.0.0.2", 00:17:19.873 "trsvcid": "4420" 00:17:19.873 }, 00:17:19.873 "peer_address": { 00:17:19.873 "trtype": "TCP", 00:17:19.873 "adrfam": "IPv4", 00:17:19.873 "traddr": "10.0.0.1", 00:17:19.873 "trsvcid": "49978" 00:17:19.873 }, 00:17:19.873 "auth": { 00:17:19.873 "state": "completed", 00:17:19.873 "digest": "sha384", 00:17:19.873 "dhgroup": "ffdhe3072" 00:17:19.873 } 00:17:19.873 } 00:17:19.873 ]' 00:17:19.873 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.131 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.389 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:20.389 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.957 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.957 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.215 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.473 { 00:17:21.473 "cntlid": 69, 00:17:21.473 "qid": 0, 00:17:21.473 "state": "enabled", 00:17:21.473 "thread": "nvmf_tgt_poll_group_000", 00:17:21.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.473 
"listen_address": { 00:17:21.473 "trtype": "TCP", 00:17:21.473 "adrfam": "IPv4", 00:17:21.473 "traddr": "10.0.0.2", 00:17:21.473 "trsvcid": "4420" 00:17:21.473 }, 00:17:21.473 "peer_address": { 00:17:21.473 "trtype": "TCP", 00:17:21.473 "adrfam": "IPv4", 00:17:21.473 "traddr": "10.0.0.1", 00:17:21.473 "trsvcid": "50016" 00:17:21.473 }, 00:17:21.473 "auth": { 00:17:21.473 "state": "completed", 00:17:21.473 "digest": "sha384", 00:17:21.473 "dhgroup": "ffdhe3072" 00:17:21.473 } 00:17:21.473 } 00:17:21.473 ]' 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.473 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.731 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.731 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.731 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.731 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.731 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.989 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:21.989 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.555 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.814 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.814 00:17:23.073 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.073 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:23.073 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.073 { 00:17:23.073 "cntlid": 71, 00:17:23.073 "qid": 0, 00:17:23.073 "state": "enabled", 00:17:23.073 "thread": "nvmf_tgt_poll_group_000", 00:17:23.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.073 "listen_address": { 00:17:23.073 "trtype": "TCP", 00:17:23.073 "adrfam": "IPv4", 00:17:23.073 "traddr": "10.0.0.2", 00:17:23.073 "trsvcid": "4420" 00:17:23.073 }, 00:17:23.073 "peer_address": { 00:17:23.073 "trtype": "TCP", 00:17:23.073 "adrfam": "IPv4", 00:17:23.073 "traddr": "10.0.0.1", 00:17:23.073 "trsvcid": "59588" 00:17:23.073 }, 00:17:23.073 "auth": { 00:17:23.073 "state": "completed", 00:17:23.073 "digest": "sha384", 00:17:23.073 "dhgroup": "ffdhe3072" 00:17:23.073 } 00:17:23.073 } 00:17:23.073 ]' 00:17:23.073 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.331 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.331 21:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.331 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.331 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.331 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.331 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.331 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.589 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:23.589 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.155 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.721 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.721 21:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.721 { 00:17:24.721 "cntlid": 73, 00:17:24.721 "qid": 0, 00:17:24.721 "state": "enabled", 00:17:24.721 "thread": "nvmf_tgt_poll_group_000", 00:17:24.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.721 "listen_address": { 00:17:24.721 "trtype": "TCP", 00:17:24.721 "adrfam": "IPv4", 00:17:24.721 "traddr": "10.0.0.2", 00:17:24.721 "trsvcid": "4420" 00:17:24.721 }, 00:17:24.721 "peer_address": { 00:17:24.721 "trtype": "TCP", 00:17:24.721 "adrfam": "IPv4", 00:17:24.721 "traddr": "10.0.0.1", 00:17:24.721 "trsvcid": "59618" 00:17:24.721 }, 00:17:24.721 "auth": { 00:17:24.721 "state": "completed", 00:17:24.721 "digest": "sha384", 00:17:24.721 "dhgroup": "ffdhe4096" 00:17:24.721 } 00:17:24.721 } 00:17:24.721 ]' 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.721 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.980 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.980 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.980 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.980 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.980 21:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.980 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:24.980 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:25.546 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.547 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.547 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.547 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.805 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.065 00:17:26.065 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.065 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.065 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.324 { 00:17:26.324 "cntlid": 75, 00:17:26.324 "qid": 0, 00:17:26.324 "state": "enabled", 00:17:26.324 "thread": "nvmf_tgt_poll_group_000", 00:17:26.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:26.324 
"listen_address": { 00:17:26.324 "trtype": "TCP", 00:17:26.324 "adrfam": "IPv4", 00:17:26.324 "traddr": "10.0.0.2", 00:17:26.324 "trsvcid": "4420" 00:17:26.324 }, 00:17:26.324 "peer_address": { 00:17:26.324 "trtype": "TCP", 00:17:26.324 "adrfam": "IPv4", 00:17:26.324 "traddr": "10.0.0.1", 00:17:26.324 "trsvcid": "59644" 00:17:26.324 }, 00:17:26.324 "auth": { 00:17:26.324 "state": "completed", 00:17:26.324 "digest": "sha384", 00:17:26.324 "dhgroup": "ffdhe4096" 00:17:26.324 } 00:17:26.324 } 00:17:26.324 ]' 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.324 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.583 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.583 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.583 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.583 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:26.583 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.150 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.408 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.666 00:17:27.666 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:27.666 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.666 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.924 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.924 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.924 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.924 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.924 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.924 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.924 { 00:17:27.924 "cntlid": 77, 00:17:27.924 "qid": 0, 00:17:27.924 "state": "enabled", 00:17:27.924 "thread": "nvmf_tgt_poll_group_000", 00:17:27.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:27.924 "listen_address": { 00:17:27.924 "trtype": "TCP", 00:17:27.924 "adrfam": "IPv4", 00:17:27.924 "traddr": "10.0.0.2", 00:17:27.924 "trsvcid": "4420" 00:17:27.924 }, 00:17:27.924 "peer_address": { 00:17:27.924 "trtype": "TCP", 00:17:27.924 "adrfam": "IPv4", 00:17:27.924 "traddr": "10.0.0.1", 00:17:27.924 "trsvcid": "59672" 00:17:27.924 }, 00:17:27.924 "auth": { 00:17:27.925 "state": "completed", 00:17:27.925 "digest": "sha384", 00:17:27.925 "dhgroup": "ffdhe4096" 00:17:27.925 } 00:17:27.925 } 00:17:27.925 ]' 00:17:27.925 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.925 21:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.925 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.925 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.925 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.925 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.925 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.925 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.183 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:28.183 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.795 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:29.052 21:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.052 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.309 00:17:29.309 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.309 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.309 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.566 21:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.566 { 00:17:29.566 "cntlid": 79, 00:17:29.566 "qid": 0, 00:17:29.566 "state": "enabled", 00:17:29.566 "thread": "nvmf_tgt_poll_group_000", 00:17:29.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:29.566 "listen_address": { 00:17:29.566 "trtype": "TCP", 00:17:29.566 "adrfam": "IPv4", 00:17:29.566 "traddr": "10.0.0.2", 00:17:29.566 "trsvcid": "4420" 00:17:29.566 }, 00:17:29.566 "peer_address": { 00:17:29.566 "trtype": "TCP", 00:17:29.566 "adrfam": "IPv4", 00:17:29.566 "traddr": "10.0.0.1", 00:17:29.566 "trsvcid": "59694" 00:17:29.566 }, 00:17:29.566 "auth": { 00:17:29.566 "state": "completed", 00:17:29.566 "digest": "sha384", 00:17:29.566 "dhgroup": "ffdhe4096" 00:17:29.566 } 00:17:29.566 } 00:17:29.566 ]' 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.566 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.566 21:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.824 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:29.824 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:30.388 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.645 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.646 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.904 00:17:30.904 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.904 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.904 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.162 { 00:17:31.162 "cntlid": 81, 00:17:31.162 "qid": 0, 00:17:31.162 "state": "enabled", 00:17:31.162 "thread": "nvmf_tgt_poll_group_000", 00:17:31.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.162 "listen_address": { 
00:17:31.162 "trtype": "TCP", 00:17:31.162 "adrfam": "IPv4", 00:17:31.162 "traddr": "10.0.0.2", 00:17:31.162 "trsvcid": "4420" 00:17:31.162 }, 00:17:31.162 "peer_address": { 00:17:31.162 "trtype": "TCP", 00:17:31.162 "adrfam": "IPv4", 00:17:31.162 "traddr": "10.0.0.1", 00:17:31.162 "trsvcid": "59736" 00:17:31.162 }, 00:17:31.162 "auth": { 00:17:31.162 "state": "completed", 00:17:31.162 "digest": "sha384", 00:17:31.162 "dhgroup": "ffdhe6144" 00:17:31.162 } 00:17:31.162 } 00:17:31.162 ]' 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.162 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.420 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:31.420 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.988 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.247 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.505 00:17:32.505 21:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.505 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.505 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.763 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.763 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.763 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.763 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.763 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.763 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.763 { 00:17:32.763 "cntlid": 83, 00:17:32.763 "qid": 0, 00:17:32.763 "state": "enabled", 00:17:32.763 "thread": "nvmf_tgt_poll_group_000", 00:17:32.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:32.763 "listen_address": { 00:17:32.763 "trtype": "TCP", 00:17:32.763 "adrfam": "IPv4", 00:17:32.763 "traddr": "10.0.0.2", 00:17:32.763 "trsvcid": "4420" 00:17:32.763 }, 00:17:32.763 "peer_address": { 00:17:32.764 "trtype": "TCP", 00:17:32.764 "adrfam": "IPv4", 00:17:32.764 "traddr": "10.0.0.1", 00:17:32.764 "trsvcid": "48796" 00:17:32.764 }, 00:17:32.764 "auth": { 00:17:32.764 "state": "completed", 00:17:32.764 "digest": "sha384", 00:17:32.764 "dhgroup": "ffdhe6144" 00:17:32.764 } 00:17:32.764 } 00:17:32.764 ]' 00:17:32.764 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:32.764 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.764 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.021 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.021 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.021 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.021 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.021 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.279 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:33.279 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.844 21:10:41 
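The trace above repeatedly validates the qpair listing returned by `nvmf_subsystem_get_qpairs` with three `jq` filters (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of the same check, assuming only the JSON shape actually captured in this log (field names copied from the `qpairs` output above); this is an illustrative re-expression, not part of `target/auth.sh`:

```python
import json

def check_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq checks from the trace: the first qpair must report
    completed DH-HMAC-CHAP authentication with the expected digest/dhgroup."""
    qpairs = json.loads(qpairs_json)
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

# Sample shaped like the captured output (timestamps stripped).
sample = '''[{"cntlid": 83, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe6144"}}]'''
print(check_auth(sample, "sha384", "ffdhe6144"))
```

The `[[ sha384 == \s\h\a\3\8\4 ]]` comparisons in the trace are bash's way of doing the same equality test with pattern-matching disabled.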
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.844 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.845 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.412 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.412 { 00:17:34.412 "cntlid": 85, 00:17:34.412 "qid": 0, 00:17:34.412 "state": "enabled", 00:17:34.412 "thread": "nvmf_tgt_poll_group_000", 00:17:34.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:34.412 "listen_address": { 00:17:34.412 "trtype": "TCP", 00:17:34.412 "adrfam": "IPv4", 00:17:34.412 "traddr": "10.0.0.2", 00:17:34.412 "trsvcid": "4420" 00:17:34.412 }, 00:17:34.412 "peer_address": { 00:17:34.412 "trtype": "TCP", 00:17:34.412 "adrfam": "IPv4", 00:17:34.412 "traddr": "10.0.0.1", 00:17:34.412 "trsvcid": "48816" 00:17:34.412 }, 00:17:34.412 "auth": { 00:17:34.412 "state": "completed", 00:17:34.412 "digest": "sha384", 00:17:34.412 "dhgroup": "ffdhe6144" 00:17:34.412 } 00:17:34.412 } 00:17:34.412 ]' 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.412 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.670 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.670 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.670 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:34.670 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.670 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.928 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:34.928 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.494 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.061 00:17:36.061 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.061 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.061 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.061 { 00:17:36.061 "cntlid": 87, 00:17:36.061 "qid": 0, 00:17:36.061 "state": "enabled", 00:17:36.061 "thread": "nvmf_tgt_poll_group_000", 00:17:36.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:36.061 "listen_address": { 00:17:36.061 "trtype": 
"TCP", 00:17:36.061 "adrfam": "IPv4", 00:17:36.061 "traddr": "10.0.0.2", 00:17:36.061 "trsvcid": "4420" 00:17:36.061 }, 00:17:36.061 "peer_address": { 00:17:36.061 "trtype": "TCP", 00:17:36.061 "adrfam": "IPv4", 00:17:36.061 "traddr": "10.0.0.1", 00:17:36.061 "trsvcid": "48840" 00:17:36.061 }, 00:17:36.061 "auth": { 00:17:36.061 "state": "completed", 00:17:36.061 "digest": "sha384", 00:17:36.061 "dhgroup": "ffdhe6144" 00:17:36.061 } 00:17:36.061 } 00:17:36.061 ]' 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.061 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.319 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.319 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.319 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.319 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.319 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.592 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:36.592 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:37.158 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.158 21:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.158 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.723 00:17:37.723 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.723 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.723 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.981 { 00:17:37.981 "cntlid": 89, 00:17:37.981 "qid": 0, 00:17:37.981 "state": "enabled", 00:17:37.981 "thread": "nvmf_tgt_poll_group_000", 00:17:37.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.981 "listen_address": { 00:17:37.981 "trtype": "TCP", 00:17:37.981 "adrfam": "IPv4", 00:17:37.981 "traddr": "10.0.0.2", 00:17:37.981 "trsvcid": "4420" 00:17:37.981 }, 00:17:37.981 "peer_address": { 00:17:37.981 "trtype": "TCP", 00:17:37.981 "adrfam": "IPv4", 00:17:37.981 "traddr": "10.0.0.1", 00:17:37.981 "trsvcid": "48858" 00:17:37.981 }, 00:17:37.981 "auth": { 00:17:37.981 "state": "completed", 00:17:37.981 "digest": "sha384", 00:17:37.981 "dhgroup": "ffdhe8192" 00:17:37.981 } 00:17:37.981 } 00:17:37.981 ]' 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.981 21:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.981 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.981 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.981 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.981 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.240 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:38.240 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:38.806 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
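The `--dhchap-secret`/`--dhchap-ctrl-secret` arguments in the trace use the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hh>:<base64>:`. As I understand the nvme-cli/TP 8006 convention (an assumption here, not stated in the log), the two-digit field names an optional hash transformation of the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the key bytes followed by their CRC-32, little-endian. A sketch that round-trips a made-up key under that assumption:

```python
import base64
import struct
import zlib

def encode_dhchap_secret(key: bytes, hash_id: str = "01") -> str:
    """Build a DHHC-1 string: base64(key || CRC32(key) little-endian).
    hash_id mapping (00=none, 01=SHA-256, ...) is an assumption from nvme-cli."""
    raw = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{hash_id}:{base64.b64encode(raw).decode()}:"

def decode_dhchap_secret(secret: str) -> bytes:
    """Split a DHHC-1:<hh>:<b64>: secret and verify the trailing CRC-32."""
    prefix, _hash_id, b64, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
    if zlib.crc32(key) != crc:
        raise ValueError("CRC mismatch")
    return key

# Round-trip an illustrative 32-byte key (not one of the keys from the log).
k = bytes(range(32))
s = encode_dhchap_secret(k)
print(decode_dhchap_secret(s) == k)
```

The secrets in the log itself are left untouched; this only shows why they end in a trailing `:` and carry a few extra base64 characters beyond the key material.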
00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.807 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.064 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.630 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.630 { 00:17:39.630 "cntlid": 91, 00:17:39.630 "qid": 0, 00:17:39.630 "state": "enabled", 00:17:39.630 "thread": "nvmf_tgt_poll_group_000", 00:17:39.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:39.630 "listen_address": { 00:17:39.630 "trtype": "TCP", 00:17:39.630 "adrfam": "IPv4", 00:17:39.630 "traddr": "10.0.0.2", 00:17:39.630 "trsvcid": "4420" 00:17:39.630 }, 00:17:39.630 "peer_address": { 00:17:39.630 "trtype": "TCP", 00:17:39.630 "adrfam": "IPv4", 00:17:39.630 "traddr": "10.0.0.1", 00:17:39.630 "trsvcid": "48884" 00:17:39.630 }, 00:17:39.630 "auth": { 00:17:39.630 "state": "completed", 00:17:39.630 "digest": "sha384", 00:17:39.630 "dhgroup": "ffdhe8192" 00:17:39.630 } 00:17:39.630 } 00:17:39.630 ]' 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.630 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.889 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.889 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.889 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:39.889 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.889 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.147 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:40.147 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.710 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.275 00:17:41.275 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.275 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.275 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.532 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.532 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.532 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.532 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.532 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.532 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.532 { 00:17:41.532 "cntlid": 93, 00:17:41.532 "qid": 0, 00:17:41.532 "state": "enabled", 00:17:41.532 "thread": "nvmf_tgt_poll_group_000", 00:17:41.532 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:41.532 "listen_address": { 00:17:41.532 "trtype": "TCP", 00:17:41.532 "adrfam": "IPv4", 00:17:41.532 "traddr": "10.0.0.2", 00:17:41.532 "trsvcid": "4420" 00:17:41.532 }, 00:17:41.532 "peer_address": { 00:17:41.532 "trtype": "TCP", 00:17:41.532 "adrfam": "IPv4", 00:17:41.532 "traddr": "10.0.0.1", 00:17:41.532 "trsvcid": "48918" 00:17:41.532 }, 00:17:41.532 "auth": { 00:17:41.532 "state": "completed", 00:17:41.532 "digest": "sha384", 00:17:41.532 "dhgroup": "ffdhe8192" 00:17:41.532 } 00:17:41.533 } 00:17:41.533 ]' 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.533 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.791 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:41.791 21:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.358 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.616 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.182 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.183 { 00:17:43.183 "cntlid": 95, 00:17:43.183 "qid": 0, 00:17:43.183 "state": "enabled", 00:17:43.183 "thread": "nvmf_tgt_poll_group_000", 00:17:43.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:43.183 "listen_address": { 00:17:43.183 "trtype": "TCP", 00:17:43.183 "adrfam": "IPv4", 00:17:43.183 "traddr": "10.0.0.2", 00:17:43.183 "trsvcid": "4420" 00:17:43.183 }, 00:17:43.183 "peer_address": { 00:17:43.183 "trtype": "TCP", 00:17:43.183 "adrfam": "IPv4", 00:17:43.183 "traddr": "10.0.0.1", 00:17:43.183 "trsvcid": "33930" 00:17:43.183 }, 00:17:43.183 "auth": { 00:17:43.183 "state": "completed", 00:17:43.183 "digest": "sha384", 00:17:43.183 "dhgroup": "ffdhe8192" 00:17:43.183 } 00:17:43.183 } 00:17:43.183 ]' 00:17:43.183 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.440 21:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.440 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.440 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.440 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.440 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.440 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.440 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.698 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:43.698 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.265 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.524 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.524 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.782 21:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.782 { 00:17:44.782 "cntlid": 97, 00:17:44.782 "qid": 0, 00:17:44.782 "state": "enabled", 00:17:44.782 "thread": "nvmf_tgt_poll_group_000", 00:17:44.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:44.782 "listen_address": { 00:17:44.782 "trtype": "TCP", 00:17:44.782 "adrfam": "IPv4", 00:17:44.782 "traddr": "10.0.0.2", 00:17:44.782 "trsvcid": "4420" 00:17:44.782 }, 00:17:44.782 "peer_address": { 00:17:44.782 "trtype": "TCP", 00:17:44.782 "adrfam": "IPv4", 00:17:44.782 "traddr": "10.0.0.1", 00:17:44.782 "trsvcid": "33964" 00:17:44.782 }, 00:17:44.782 "auth": { 00:17:44.782 "state": "completed", 00:17:44.782 "digest": "sha512", 00:17:44.782 "dhgroup": "null" 00:17:44.782 } 00:17:44.782 } 00:17:44.782 ]' 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.782 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.040 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:45.040 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.040 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.040 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.040 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.298 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:45.298 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.866 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.124 00:17:46.124 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.124 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.124 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.382 { 00:17:46.382 "cntlid": 99, 
00:17:46.382 "qid": 0, 00:17:46.382 "state": "enabled", 00:17:46.382 "thread": "nvmf_tgt_poll_group_000", 00:17:46.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.382 "listen_address": { 00:17:46.382 "trtype": "TCP", 00:17:46.382 "adrfam": "IPv4", 00:17:46.382 "traddr": "10.0.0.2", 00:17:46.382 "trsvcid": "4420" 00:17:46.382 }, 00:17:46.382 "peer_address": { 00:17:46.382 "trtype": "TCP", 00:17:46.382 "adrfam": "IPv4", 00:17:46.382 "traddr": "10.0.0.1", 00:17:46.382 "trsvcid": "33996" 00:17:46.382 }, 00:17:46.382 "auth": { 00:17:46.382 "state": "completed", 00:17:46.382 "digest": "sha512", 00:17:46.382 "dhgroup": "null" 00:17:46.382 } 00:17:46.382 } 00:17:46.382 ]' 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.382 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.640 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.640 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.640 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.640 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret 
DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:46.640 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.211 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.468 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.725 00:17:47.725 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.725 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.725 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.029 { 00:17:48.029 "cntlid": 101, 00:17:48.029 "qid": 0, 00:17:48.029 "state": "enabled", 00:17:48.029 "thread": "nvmf_tgt_poll_group_000", 00:17:48.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.029 "listen_address": { 00:17:48.029 "trtype": "TCP", 00:17:48.029 "adrfam": "IPv4", 00:17:48.029 "traddr": "10.0.0.2", 00:17:48.029 "trsvcid": "4420" 00:17:48.029 }, 00:17:48.029 "peer_address": { 00:17:48.029 "trtype": "TCP", 00:17:48.029 "adrfam": "IPv4", 00:17:48.029 "traddr": "10.0.0.1", 00:17:48.029 "trsvcid": "34036" 00:17:48.029 }, 00:17:48.029 "auth": { 00:17:48.029 "state": "completed", 00:17:48.029 "digest": "sha512", 00:17:48.029 "dhgroup": "null" 00:17:48.029 } 00:17:48.029 } 
00:17:48.029 ]' 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.029 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.029 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.029 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.029 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.312 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:48.312 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.879 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.879 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.139 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.398 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.398 { 00:17:49.398 "cntlid": 103, 00:17:49.398 "qid": 0, 00:17:49.398 "state": "enabled", 00:17:49.398 "thread": "nvmf_tgt_poll_group_000", 00:17:49.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:49.398 "listen_address": { 00:17:49.398 "trtype": "TCP", 00:17:49.398 "adrfam": "IPv4", 00:17:49.398 "traddr": "10.0.0.2", 00:17:49.398 "trsvcid": "4420" 00:17:49.398 }, 00:17:49.398 "peer_address": { 00:17:49.398 "trtype": "TCP", 00:17:49.398 "adrfam": "IPv4", 00:17:49.398 "traddr": "10.0.0.1", 00:17:49.398 "trsvcid": "34072" 00:17:49.398 }, 00:17:49.398 "auth": { 00:17:49.398 "state": "completed", 00:17:49.398 "digest": "sha512", 00:17:49.398 "dhgroup": "null" 00:17:49.398 } 00:17:49.398 } 00:17:49.398 ]' 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.398 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.657 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.657 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.657 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.657 21:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.657 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.915 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:49.915 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.482 21:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.482 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.741 00:17:50.741 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.741 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.741 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.000 { 00:17:51.000 "cntlid": 105, 00:17:51.000 "qid": 0, 00:17:51.000 "state": "enabled", 00:17:51.000 "thread": "nvmf_tgt_poll_group_000", 00:17:51.000 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:51.000 "listen_address": { 00:17:51.000 "trtype": "TCP", 00:17:51.000 "adrfam": "IPv4", 00:17:51.000 "traddr": "10.0.0.2", 00:17:51.000 "trsvcid": "4420" 00:17:51.000 }, 00:17:51.000 "peer_address": { 00:17:51.000 "trtype": "TCP", 00:17:51.000 "adrfam": "IPv4", 00:17:51.000 "traddr": "10.0.0.1", 00:17:51.000 "trsvcid": "34110" 00:17:51.000 }, 00:17:51.000 "auth": { 00:17:51.000 "state": "completed", 00:17:51.000 "digest": "sha512", 00:17:51.000 "dhgroup": "ffdhe2048" 00:17:51.000 } 00:17:51.000 } 00:17:51.000 ]' 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.000 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret 
DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:51.258 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.825 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.083 21:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.083 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.341 00:17:52.341 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.341 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.341 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.599 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.599 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.599 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.599 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.599 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.599 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.599 { 00:17:52.599 "cntlid": 107, 00:17:52.599 "qid": 0, 00:17:52.599 "state": "enabled", 00:17:52.599 "thread": "nvmf_tgt_poll_group_000", 00:17:52.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:52.599 "listen_address": { 00:17:52.599 "trtype": "TCP", 00:17:52.599 "adrfam": "IPv4", 00:17:52.599 "traddr": "10.0.0.2", 00:17:52.599 "trsvcid": "4420" 00:17:52.599 }, 00:17:52.599 "peer_address": { 00:17:52.599 "trtype": "TCP", 00:17:52.599 "adrfam": "IPv4", 00:17:52.599 "traddr": "10.0.0.1", 00:17:52.599 "trsvcid": "54338" 00:17:52.599 }, 00:17:52.599 "auth": { 00:17:52.599 "state": 
"completed", 00:17:52.599 "digest": "sha512", 00:17:52.599 "dhgroup": "ffdhe2048" 00:17:52.599 } 00:17:52.599 } 00:17:52.599 ]' 00:17:52.600 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.600 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.600 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.600 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.600 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.858 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.858 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.858 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.858 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:52.858 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:53.425 21:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.425 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.684 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.943 00:17:53.943 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.943 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.943 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.202 
21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.202 { 00:17:54.202 "cntlid": 109, 00:17:54.202 "qid": 0, 00:17:54.202 "state": "enabled", 00:17:54.202 "thread": "nvmf_tgt_poll_group_000", 00:17:54.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:54.202 "listen_address": { 00:17:54.202 "trtype": "TCP", 00:17:54.202 "adrfam": "IPv4", 00:17:54.202 "traddr": "10.0.0.2", 00:17:54.202 "trsvcid": "4420" 00:17:54.202 }, 00:17:54.202 "peer_address": { 00:17:54.202 "trtype": "TCP", 00:17:54.202 "adrfam": "IPv4", 00:17:54.202 "traddr": "10.0.0.1", 00:17:54.202 "trsvcid": "54370" 00:17:54.202 }, 00:17:54.202 "auth": { 00:17:54.202 "state": "completed", 00:17:54.202 "digest": "sha512", 00:17:54.202 "dhgroup": "ffdhe2048" 00:17:54.202 } 00:17:54.202 } 00:17:54.202 ]' 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.202 21:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.202 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.462 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:54.462 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.030 
21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.030 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.289 21:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.289 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.548 00:17:55.548 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.548 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.548 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.807 { 00:17:55.807 "cntlid": 111, 
00:17:55.807 "qid": 0, 00:17:55.807 "state": "enabled", 00:17:55.807 "thread": "nvmf_tgt_poll_group_000", 00:17:55.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:55.807 "listen_address": { 00:17:55.807 "trtype": "TCP", 00:17:55.807 "adrfam": "IPv4", 00:17:55.807 "traddr": "10.0.0.2", 00:17:55.807 "trsvcid": "4420" 00:17:55.807 }, 00:17:55.807 "peer_address": { 00:17:55.807 "trtype": "TCP", 00:17:55.807 "adrfam": "IPv4", 00:17:55.807 "traddr": "10.0.0.1", 00:17:55.807 "trsvcid": "54392" 00:17:55.807 }, 00:17:55.807 "auth": { 00:17:55.807 "state": "completed", 00:17:55.807 "digest": "sha512", 00:17:55.807 "dhgroup": "ffdhe2048" 00:17:55.807 } 00:17:55.807 } 00:17:55.807 ]' 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.807 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.066 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:56.066 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.633 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.892 21:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.892 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.155 00:17:57.155 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.155 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.155 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.416 { 00:17:57.416 "cntlid": 113, 00:17:57.416 "qid": 0, 00:17:57.416 "state": "enabled", 00:17:57.416 "thread": "nvmf_tgt_poll_group_000", 00:17:57.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:57.416 "listen_address": { 00:17:57.416 "trtype": "TCP", 00:17:57.416 "adrfam": "IPv4", 00:17:57.416 "traddr": "10.0.0.2", 00:17:57.416 "trsvcid": "4420" 00:17:57.416 }, 00:17:57.416 "peer_address": { 00:17:57.416 "trtype": "TCP", 00:17:57.416 "adrfam": "IPv4", 00:17:57.416 "traddr": "10.0.0.1", 00:17:57.416 "trsvcid": "54422" 00:17:57.416 }, 00:17:57.416 "auth": { 00:17:57.416 "state": 
"completed", 00:17:57.416 "digest": "sha512", 00:17:57.416 "dhgroup": "ffdhe3072" 00:17:57.416 } 00:17:57.416 } 00:17:57.416 ]' 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.416 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.674 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:57.674 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret 
DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.241 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.500 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.759 00:17:58.759 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.759 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.759 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.018 { 00:17:59.018 "cntlid": 115, 00:17:59.018 "qid": 0, 00:17:59.018 "state": "enabled", 00:17:59.018 "thread": "nvmf_tgt_poll_group_000", 00:17:59.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.018 "listen_address": { 00:17:59.018 "trtype": "TCP", 00:17:59.018 "adrfam": "IPv4", 00:17:59.018 "traddr": "10.0.0.2", 00:17:59.018 "trsvcid": "4420" 00:17:59.018 }, 00:17:59.018 "peer_address": { 00:17:59.018 "trtype": "TCP", 00:17:59.018 "adrfam": "IPv4", 00:17:59.018 "traddr": "10.0.0.1", 00:17:59.018 "trsvcid": "54462" 00:17:59.018 }, 00:17:59.018 "auth": { 00:17:59.018 "state": "completed", 00:17:59.018 "digest": "sha512", 00:17:59.018 "dhgroup": "ffdhe3072" 00:17:59.018 } 00:17:59.018 } 00:17:59.018 ]' 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.018 21:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.018 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.018 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.018 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.018 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.277 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:59.277 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.844 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.103 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.362 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 21:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.362 { 00:18:00.362 "cntlid": 117, 00:18:00.362 "qid": 0, 00:18:00.362 "state": "enabled", 00:18:00.362 "thread": "nvmf_tgt_poll_group_000", 00:18:00.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:00.362 "listen_address": { 00:18:00.362 "trtype": "TCP", 00:18:00.362 "adrfam": "IPv4", 00:18:00.362 "traddr": "10.0.0.2", 00:18:00.362 "trsvcid": "4420" 00:18:00.362 }, 00:18:00.362 "peer_address": { 00:18:00.362 "trtype": "TCP", 00:18:00.362 "adrfam": "IPv4", 00:18:00.362 "traddr": "10.0.0.1", 00:18:00.362 "trsvcid": "54500" 00:18:00.362 }, 00:18:00.362 "auth": { 00:18:00.362 "state": "completed", 00:18:00.362 "digest": "sha512", 00:18:00.362 "dhgroup": "ffdhe3072" 00:18:00.362 } 00:18:00.362 } 00:18:00.362 ]' 00:18:00.362 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.621 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.879 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:00.879 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.447 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.705 00:18:01.705 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.705 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.705 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.964 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.964 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.964 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.964 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.964 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.964 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.964 { 00:18:01.965 "cntlid": 119, 00:18:01.965 "qid": 0, 00:18:01.965 "state": "enabled", 00:18:01.965 "thread": "nvmf_tgt_poll_group_000", 00:18:01.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:01.965 "listen_address": { 00:18:01.965 "trtype": "TCP", 00:18:01.965 "adrfam": "IPv4", 00:18:01.965 "traddr": "10.0.0.2", 00:18:01.965 "trsvcid": "4420" 00:18:01.965 }, 00:18:01.965 "peer_address": { 00:18:01.965 "trtype": "TCP", 00:18:01.965 "adrfam": "IPv4", 00:18:01.965 "traddr": "10.0.0.1", 
00:18:01.965 "trsvcid": "54534" 00:18:01.965 }, 00:18:01.965 "auth": { 00:18:01.965 "state": "completed", 00:18:01.965 "digest": "sha512", 00:18:01.965 "dhgroup": "ffdhe3072" 00:18:01.965 } 00:18:01.965 } 00:18:01.965 ]' 00:18:01.965 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.965 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.965 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.223 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.223 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.223 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.223 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.223 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.481 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:02.481 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.046 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.047 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.047 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.047 21:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.047 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.304 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.561 { 00:18:03.561 "cntlid": 121, 00:18:03.561 "qid": 0, 00:18:03.561 "state": "enabled", 00:18:03.561 "thread": "nvmf_tgt_poll_group_000", 00:18:03.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:03.561 "listen_address": { 00:18:03.561 "trtype": "TCP", 00:18:03.561 "adrfam": "IPv4", 00:18:03.561 "traddr": "10.0.0.2", 00:18:03.561 "trsvcid": "4420" 00:18:03.561 }, 00:18:03.561 "peer_address": { 00:18:03.561 "trtype": "TCP", 00:18:03.561 "adrfam": "IPv4", 00:18:03.561 "traddr": "10.0.0.1", 00:18:03.561 "trsvcid": "55388" 00:18:03.561 }, 00:18:03.561 "auth": { 00:18:03.561 "state": "completed", 00:18:03.561 "digest": "sha512", 00:18:03.561 "dhgroup": "ffdhe4096" 00:18:03.561 } 00:18:03.561 } 00:18:03.561 ]' 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.561 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.819 21:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.819 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.819 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.819 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.819 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.076 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:04.076 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:04.642 21:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.642 21:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.642 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.898 00:18:04.898 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.898 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.898 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.156 { 00:18:05.156 "cntlid": 123, 00:18:05.156 "qid": 0, 00:18:05.156 "state": "enabled", 00:18:05.156 "thread": "nvmf_tgt_poll_group_000", 00:18:05.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.156 "listen_address": { 00:18:05.156 "trtype": "TCP", 00:18:05.156 "adrfam": "IPv4", 00:18:05.156 "traddr": "10.0.0.2", 00:18:05.156 "trsvcid": "4420" 00:18:05.156 }, 00:18:05.156 "peer_address": { 00:18:05.156 "trtype": "TCP", 00:18:05.156 "adrfam": "IPv4", 00:18:05.156 "traddr": "10.0.0.1", 00:18:05.156 "trsvcid": "55420" 00:18:05.156 }, 00:18:05.156 "auth": { 00:18:05.156 "state": "completed", 00:18:05.156 "digest": "sha512", 00:18:05.156 "dhgroup": "ffdhe4096" 00:18:05.156 } 00:18:05.156 } 00:18:05.156 ]' 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.156 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:18:05.413 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:18:05.979 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.237 21:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.237 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.494 00:18:06.494 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.494 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.495 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.752 { 00:18:06.752 "cntlid": 125, 00:18:06.752 "qid": 0, 00:18:06.752 "state": "enabled", 00:18:06.752 "thread": "nvmf_tgt_poll_group_000", 00:18:06.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:06.752 "listen_address": { 00:18:06.752 "trtype": "TCP", 00:18:06.752 "adrfam": "IPv4", 00:18:06.752 "traddr": "10.0.0.2", 00:18:06.752 
"trsvcid": "4420" 00:18:06.752 }, 00:18:06.752 "peer_address": { 00:18:06.752 "trtype": "TCP", 00:18:06.752 "adrfam": "IPv4", 00:18:06.752 "traddr": "10.0.0.1", 00:18:06.752 "trsvcid": "55442" 00:18:06.752 }, 00:18:06.752 "auth": { 00:18:06.752 "state": "completed", 00:18:06.752 "digest": "sha512", 00:18:06.752 "dhgroup": "ffdhe4096" 00:18:06.752 } 00:18:06.752 } 00:18:06.752 ]' 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.752 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.010 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.010 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.010 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.010 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.010 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.267 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:07.267 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.832 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.089 00:18:08.089 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.089 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:08.089 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.347 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.347 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.348 { 00:18:08.348 "cntlid": 127, 00:18:08.348 "qid": 0, 00:18:08.348 "state": "enabled", 00:18:08.348 "thread": "nvmf_tgt_poll_group_000", 00:18:08.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.348 "listen_address": { 00:18:08.348 "trtype": "TCP", 00:18:08.348 "adrfam": "IPv4", 00:18:08.348 "traddr": "10.0.0.2", 00:18:08.348 "trsvcid": "4420" 00:18:08.348 }, 00:18:08.348 "peer_address": { 00:18:08.348 "trtype": "TCP", 00:18:08.348 "adrfam": "IPv4", 00:18:08.348 "traddr": "10.0.0.1", 00:18:08.348 "trsvcid": "55462" 00:18:08.348 }, 00:18:08.348 "auth": { 00:18:08.348 "state": "completed", 00:18:08.348 "digest": "sha512", 00:18:08.348 "dhgroup": "ffdhe4096" 00:18:08.348 } 00:18:08.348 } 00:18:08.348 ]' 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.348 
21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.348 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.607 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.607 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.607 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.607 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:08.607 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.175 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.434 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.691 00:18:09.949 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.949 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.949 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.949 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.949 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.949 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.949 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.949 21:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.949 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.949 { 00:18:09.949 "cntlid": 129, 00:18:09.949 "qid": 0, 00:18:09.949 "state": "enabled", 00:18:09.949 "thread": "nvmf_tgt_poll_group_000", 00:18:09.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.949 "listen_address": { 00:18:09.949 "trtype": "TCP", 00:18:09.949 "adrfam": "IPv4", 00:18:09.949 "traddr": "10.0.0.2", 00:18:09.949 "trsvcid": "4420" 00:18:09.949 }, 00:18:09.949 "peer_address": { 00:18:09.949 "trtype": "TCP", 00:18:09.949 "adrfam": "IPv4", 00:18:09.949 "traddr": "10.0.0.1", 00:18:09.950 "trsvcid": "55502" 00:18:09.950 }, 00:18:09.950 "auth": { 00:18:09.950 "state": "completed", 00:18:09.950 "digest": "sha512", 00:18:09.950 "dhgroup": "ffdhe6144" 00:18:09.950 } 00:18:09.950 } 00:18:09.950 ]' 00:18:09.950 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.208 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.466 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:10.466 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.032 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.032 21:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.032 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.599 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.599 { 00:18:11.599 "cntlid": 131, 00:18:11.599 "qid": 0, 00:18:11.599 "state": "enabled", 00:18:11.599 "thread": "nvmf_tgt_poll_group_000", 00:18:11.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:11.599 "listen_address": { 00:18:11.599 "trtype": "TCP", 00:18:11.599 "adrfam": "IPv4", 00:18:11.599 "traddr": "10.0.0.2", 00:18:11.599 
"trsvcid": "4420" 00:18:11.599 }, 00:18:11.599 "peer_address": { 00:18:11.599 "trtype": "TCP", 00:18:11.599 "adrfam": "IPv4", 00:18:11.599 "traddr": "10.0.0.1", 00:18:11.599 "trsvcid": "55530" 00:18:11.599 }, 00:18:11.599 "auth": { 00:18:11.599 "state": "completed", 00:18:11.599 "digest": "sha512", 00:18:11.599 "dhgroup": "ffdhe6144" 00:18:11.599 } 00:18:11.599 } 00:18:11.599 ]' 00:18:11.599 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.858 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.116 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:18:12.116 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.684 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.251 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.251 { 00:18:13.251 "cntlid": 133, 00:18:13.251 "qid": 0, 00:18:13.251 "state": "enabled", 00:18:13.251 "thread": "nvmf_tgt_poll_group_000", 00:18:13.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:13.251 "listen_address": { 00:18:13.251 "trtype": "TCP", 00:18:13.251 "adrfam": "IPv4", 00:18:13.251 "traddr": "10.0.0.2", 00:18:13.251 "trsvcid": "4420" 00:18:13.251 }, 00:18:13.251 "peer_address": { 00:18:13.251 "trtype": "TCP", 00:18:13.251 "adrfam": "IPv4", 00:18:13.251 "traddr": "10.0.0.1", 00:18:13.251 "trsvcid": "60332" 00:18:13.251 }, 00:18:13.251 "auth": { 00:18:13.251 "state": "completed", 00:18:13.251 "digest": "sha512", 00:18:13.251 "dhgroup": "ffdhe6144" 00:18:13.251 } 00:18:13.251 } 00:18:13.251 ]' 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.251 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.251 21:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:13.511 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:14.079 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.079 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.338 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.905 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.905 { 00:18:14.905 "cntlid": 135, 00:18:14.905 "qid": 0, 00:18:14.905 "state": "enabled", 00:18:14.905 "thread": "nvmf_tgt_poll_group_000", 00:18:14.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:14.905 "listen_address": { 00:18:14.905 "trtype": "TCP", 00:18:14.905 "adrfam": "IPv4", 00:18:14.905 "traddr": "10.0.0.2", 00:18:14.905 "trsvcid": "4420" 00:18:14.905 }, 00:18:14.905 "peer_address": { 00:18:14.905 "trtype": "TCP", 00:18:14.905 "adrfam": "IPv4", 00:18:14.905 "traddr": "10.0.0.1", 00:18:14.905 "trsvcid": "60360" 00:18:14.905 }, 00:18:14.905 "auth": { 00:18:14.905 "state": "completed", 00:18:14.905 "digest": "sha512", 00:18:14.905 "dhgroup": "ffdhe6144" 00:18:14.905 } 00:18:14.905 } 00:18:14.905 ]' 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.905 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:15.164 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.729 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.729 21:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.987 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.556 00:18:16.556 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.556 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.556 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.815 { 00:18:16.815 "cntlid": 137, 00:18:16.815 "qid": 0, 00:18:16.815 "state": "enabled", 00:18:16.815 "thread": "nvmf_tgt_poll_group_000", 00:18:16.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.815 "listen_address": { 00:18:16.815 "trtype": "TCP", 00:18:16.815 "adrfam": "IPv4", 00:18:16.815 "traddr": "10.0.0.2", 00:18:16.815 
"trsvcid": "4420" 00:18:16.815 }, 00:18:16.815 "peer_address": { 00:18:16.815 "trtype": "TCP", 00:18:16.815 "adrfam": "IPv4", 00:18:16.815 "traddr": "10.0.0.1", 00:18:16.815 "trsvcid": "60386" 00:18:16.815 }, 00:18:16.815 "auth": { 00:18:16.815 "state": "completed", 00:18:16.815 "digest": "sha512", 00:18:16.815 "dhgroup": "ffdhe8192" 00:18:16.815 } 00:18:16.815 } 00:18:16.815 ]' 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.815 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.073 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:17.073 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.639 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.899 21:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.899 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.466 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.466 { 00:18:18.466 "cntlid": 139, 00:18:18.466 "qid": 0, 00:18:18.466 "state": "enabled", 00:18:18.466 "thread": "nvmf_tgt_poll_group_000", 00:18:18.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:18.466 "listen_address": { 00:18:18.466 "trtype": "TCP", 00:18:18.466 "adrfam": "IPv4", 00:18:18.466 "traddr": "10.0.0.2", 00:18:18.466 "trsvcid": "4420" 00:18:18.466 }, 00:18:18.466 "peer_address": { 00:18:18.466 "trtype": "TCP", 00:18:18.466 "adrfam": "IPv4", 00:18:18.466 "traddr": "10.0.0.1", 00:18:18.466 "trsvcid": "60420" 00:18:18.466 }, 00:18:18.466 "auth": { 00:18:18.466 "state": "completed", 00:18:18.466 "digest": "sha512", 00:18:18.466 "dhgroup": "ffdhe8192" 00:18:18.466 } 00:18:18.466 } 00:18:18.466 ]' 00:18:18.466 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.725 21:11:26 
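The `jq -r '.[0].auth.digest'` checks that follow operate on the `nvmf_subsystem_get_qpairs` record printed above. As a rough stand-in for the same extraction (grep/cut instead of jq, which only works for this flat single-record shape; the JSON literal is abridged from the log):

```shell
#!/usr/bin/env bash
# Abridged copy of the qpair record printed above.
qpairs='[ { "cntlid": 139, "qid": 0, "state": "enabled",
  "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192" } } ]'

# Poor man's jq -r '.[0].auth.digest' / '.[0].auth.dhgroup' for this flat case.
digest=$(printf '%s' "$qpairs" | grep -o '"digest": "[^"]*"' | head -1 | cut -d'"' -f4)
dhgroup=$(printf '%s' "$qpairs" | grep -o '"dhgroup": "[^"]*"' | head -1 | cut -d'"' -f4)
echo "$digest $dhgroup"
```

The test then pattern-matches these values (`[[ sha512 == \s\h\a\5\1\2 ]]` etc.) to confirm the qpair completed authentication with the digest and DH group that were just configured.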
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.725 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.725 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.725 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.725 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.725 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.725 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.984 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:18:18.984 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: --dhchap-ctrl-secret DHHC-1:02:NTA5ZGE2OTVmMjMwYmNmYjRkOTZjNDUxNDFiNmZkN2E5NmI3MWJjZjNjZmFjMDM3enLH5w==: 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
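The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line seen in each loop iteration uses bash's `:+` expansion to emit the controller-key arguments only when a controller key exists at that index; otherwise the array stays empty and nothing is appended to the RPC call. A standalone illustration (the array contents here are made up):

```shell
#!/usr/bin/env bash
keyid=2
ckeys=(ckey0 ckey1 ckey2 "")            # index 3 deliberately left empty

# Same trick as target/auth.sh: expand to two words, or to nothing at all.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "with key: ${#ckey[@]} args"       # 2 args

keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "without key: ${#ckey[@]} args"    # 0 args
```

That is why some `nvmf_subsystem_add_host` calls in the log carry `--dhchap-ctrlr-key ckeyN` and others (the key3 case) do not.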
00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.551 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.118 00:18:20.118 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.118 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.118 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.376 21:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.376 { 00:18:20.376 "cntlid": 141, 00:18:20.376 "qid": 0, 00:18:20.376 "state": "enabled", 00:18:20.376 "thread": "nvmf_tgt_poll_group_000", 00:18:20.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.376 "listen_address": { 00:18:20.376 "trtype": "TCP", 00:18:20.376 "adrfam": "IPv4", 00:18:20.376 "traddr": "10.0.0.2", 00:18:20.376 "trsvcid": "4420" 00:18:20.376 }, 00:18:20.376 "peer_address": { 00:18:20.376 "trtype": "TCP", 00:18:20.376 "adrfam": "IPv4", 00:18:20.376 "traddr": "10.0.0.1", 00:18:20.376 "trsvcid": "60450" 00:18:20.376 }, 00:18:20.376 "auth": { 00:18:20.376 "state": "completed", 00:18:20.376 "digest": "sha512", 00:18:20.376 "dhgroup": "ffdhe8192" 00:18:20.376 } 00:18:20.376 } 00:18:20.376 ]' 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.376 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.635 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:20.635 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:01:ZjQxODZjNjcyY2YwMjdhZjRjMDU3ZjFiMGIyN2JjOGXA0tmp: 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.202 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.460 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.461 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.026 00:18:22.026 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.026 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.026 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.284 { 00:18:22.284 "cntlid": 143, 00:18:22.284 "qid": 0, 00:18:22.284 "state": "enabled", 00:18:22.284 "thread": "nvmf_tgt_poll_group_000", 00:18:22.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.284 "listen_address": { 00:18:22.284 "trtype": "TCP", 00:18:22.284 "adrfam": 
"IPv4", 00:18:22.284 "traddr": "10.0.0.2", 00:18:22.284 "trsvcid": "4420" 00:18:22.284 }, 00:18:22.284 "peer_address": { 00:18:22.284 "trtype": "TCP", 00:18:22.284 "adrfam": "IPv4", 00:18:22.284 "traddr": "10.0.0.1", 00:18:22.284 "trsvcid": "60480" 00:18:22.284 }, 00:18:22.284 "auth": { 00:18:22.284 "state": "completed", 00:18:22.284 "digest": "sha512", 00:18:22.284 "dhgroup": "ffdhe8192" 00:18:22.284 } 00:18:22.284 } 00:18:22.284 ]' 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.284 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.541 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:22.541 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.106 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.364 21:11:31 
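The `IFS=,` / `printf %s sha256,sha384,sha512` pair at `target/auth.sh@129-130` above is the script joining its digest and dhgroup arrays into the comma-separated values that `bdev_nvme_set_options` expects. The idiom in isolation (array contents taken from the log):

```shell
#!/usr/bin/env bash
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

# "${arr[*]}" joins elements on the first character of IFS; doing the IFS
# assignment inside the command substitution keeps it from leaking out.
joined_digests=$(IFS=,; printf %s "${digests[*]}")
joined_dhgroups=$(IFS=,; printf %s "${dhgroups[*]}")
echo "$joined_digests"
echo "$joined_dhgroups"
```

The joined strings are then passed straight through as `--dhchap-digests` and `--dhchap-dhgroups`, which is why this final pass re-enables every digest and DH group at once rather than a single combination per iteration.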
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.364 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.931 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.931 { 00:18:23.931 "cntlid": 145, 00:18:23.931 "qid": 0, 00:18:23.931 "state": "enabled", 00:18:23.931 "thread": "nvmf_tgt_poll_group_000", 00:18:23.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.931 "listen_address": { 00:18:23.931 "trtype": "TCP", 00:18:23.931 "adrfam": "IPv4", 00:18:23.931 "traddr": "10.0.0.2", 00:18:23.931 "trsvcid": "4420" 00:18:23.931 }, 00:18:23.931 "peer_address": { 00:18:23.931 "trtype": "TCP", 00:18:23.931 "adrfam": "IPv4", 00:18:23.931 "traddr": "10.0.0.1", 00:18:23.931 "trsvcid": "45522" 00:18:23.931 }, 00:18:23.931 "auth": { 00:18:23.931 "state": 
"completed", 00:18:23.931 "digest": "sha512", 00:18:23.931 "dhgroup": "ffdhe8192" 00:18:23.931 } 00:18:23.931 } 00:18:23.931 ]' 00:18:23.931 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.931 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.931 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:24.189 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDRjMGI2NmRhNTc0Mjg2MmI5NmFlMmNjYjA1NGRiYzBhMzhmNDBmMTdkOGM5OWQ3UtTA/Q==: --dhchap-ctrl-secret 
DHHC-1:03:NjIwMjZkNmRlOTMzMWFkYzdhNjY3MGE0MmRhN2ZkNDQzMDAyNjMxNDU4ODkxZGVmNmVhMTQ4MzZmYjgyYzM0NIYNcfE=: 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:24.755 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:25.013 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:25.271 request: 00:18:25.271 { 00:18:25.271 "name": "nvme0", 00:18:25.271 "trtype": "tcp", 00:18:25.271 "traddr": "10.0.0.2", 00:18:25.271 "adrfam": "ipv4", 00:18:25.271 "trsvcid": "4420", 00:18:25.271 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.271 "prchk_reftag": false, 00:18:25.271 "prchk_guard": false, 00:18:25.271 "hdgst": false, 00:18:25.271 "ddgst": false, 00:18:25.271 "dhchap_key": "key2", 00:18:25.271 "allow_unrecognized_csi": false, 00:18:25.271 "method": "bdev_nvme_attach_controller", 00:18:25.271 "req_id": 1 00:18:25.271 } 00:18:25.271 Got JSON-RPC error response 00:18:25.271 response: 00:18:25.271 { 00:18:25.271 "code": -5, 00:18:25.271 "message": 
"Input/output error" 00:18:25.271 } 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.271 21:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.271 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.272 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.272 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.272 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:25.272 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:25.272 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:25.838 request: 00:18:25.838 { 00:18:25.838 "name": "nvme0", 00:18:25.838 "trtype": "tcp", 00:18:25.838 "traddr": "10.0.0.2", 00:18:25.838 "adrfam": "ipv4", 00:18:25.838 "trsvcid": "4420", 00:18:25.838 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.838 "prchk_reftag": false, 00:18:25.838 "prchk_guard": false, 00:18:25.838 "hdgst": 
false, 00:18:25.838 "ddgst": false, 00:18:25.838 "dhchap_key": "key1", 00:18:25.838 "dhchap_ctrlr_key": "ckey2", 00:18:25.838 "allow_unrecognized_csi": false, 00:18:25.838 "method": "bdev_nvme_attach_controller", 00:18:25.838 "req_id": 1 00:18:25.838 } 00:18:25.838 Got JSON-RPC error response 00:18:25.838 response: 00:18:25.838 { 00:18:25.838 "code": -5, 00:18:25.838 "message": "Input/output error" 00:18:25.838 } 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.838 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.839 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.839 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.839 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.839 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.406 request: 00:18:26.406 { 00:18:26.406 "name": "nvme0", 00:18:26.406 "trtype": 
"tcp", 00:18:26.406 "traddr": "10.0.0.2", 00:18:26.406 "adrfam": "ipv4", 00:18:26.406 "trsvcid": "4420", 00:18:26.406 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.406 "prchk_reftag": false, 00:18:26.406 "prchk_guard": false, 00:18:26.406 "hdgst": false, 00:18:26.406 "ddgst": false, 00:18:26.406 "dhchap_key": "key1", 00:18:26.406 "dhchap_ctrlr_key": "ckey1", 00:18:26.406 "allow_unrecognized_csi": false, 00:18:26.406 "method": "bdev_nvme_attach_controller", 00:18:26.406 "req_id": 1 00:18:26.406 } 00:18:26.406 Got JSON-RPC error response 00:18:26.406 response: 00:18:26.406 { 00:18:26.406 "code": -5, 00:18:26.406 "message": "Input/output error" 00:18:26.406 } 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1289888 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1289888 ']' 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1289888 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1289888 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.406 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1289888' 00:18:26.407 killing process with pid 1289888 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1289888 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1289888 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:26.407 21:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1311156 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1311156 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1311156 ']' 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.407 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # 
waitforlisten 1311156 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1311156 ']' 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.665 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.923 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.923 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:26.923 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:26.923 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.923 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.923 null0 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.b4b 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1Vp ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Vp 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OWm 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.zC7 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zC7 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.foT 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.uXQ ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uXQ 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jdN 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.180 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.745 nvme0n1 00:18:28.003 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.003 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.003 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.003 { 00:18:28.003 "cntlid": 1, 00:18:28.003 "qid": 0, 00:18:28.003 "state": "enabled", 00:18:28.003 "thread": "nvmf_tgt_poll_group_000", 00:18:28.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.003 "listen_address": { 00:18:28.003 "trtype": "TCP", 00:18:28.003 "adrfam": "IPv4", 00:18:28.003 "traddr": "10.0.0.2", 00:18:28.003 "trsvcid": "4420" 00:18:28.003 }, 00:18:28.003 "peer_address": { 00:18:28.003 "trtype": "TCP", 00:18:28.003 "adrfam": "IPv4", 00:18:28.003 "traddr": "10.0.0.1", 00:18:28.003 "trsvcid": "45570" 00:18:28.003 }, 00:18:28.003 "auth": { 
00:18:28.003 "state": "completed", 00:18:28.003 "digest": "sha512", 00:18:28.003 "dhgroup": "ffdhe8192" 00:18:28.003 } 00:18:28.003 } 00:18:28.003 ]' 00:18:28.003 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.261 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.518 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:28.518 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:29.159 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:29.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.159 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.468 request: 00:18:29.468 { 00:18:29.468 "name": "nvme0", 00:18:29.468 "trtype": "tcp", 00:18:29.468 "traddr": "10.0.0.2", 00:18:29.468 "adrfam": "ipv4", 00:18:29.468 "trsvcid": "4420", 00:18:29.468 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:29.468 "prchk_reftag": false, 00:18:29.468 "prchk_guard": false, 00:18:29.468 "hdgst": false, 00:18:29.468 "ddgst": false, 00:18:29.468 "dhchap_key": "key3", 00:18:29.468 "allow_unrecognized_csi": false, 00:18:29.468 "method": "bdev_nvme_attach_controller", 00:18:29.468 "req_id": 1 00:18:29.468 } 
00:18:29.468 Got JSON-RPC error response 00:18:29.468 response: 00:18:29.468 { 00:18:29.468 "code": -5, 00:18:29.468 "message": "Input/output error" 00:18:29.468 } 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:29.468 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.727 21:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.727 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.987 request: 00:18:29.987 { 00:18:29.987 "name": "nvme0", 00:18:29.987 "trtype": "tcp", 00:18:29.987 "traddr": "10.0.0.2", 00:18:29.987 "adrfam": "ipv4", 00:18:29.987 "trsvcid": "4420", 00:18:29.987 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:29.987 "prchk_reftag": false, 00:18:29.987 "prchk_guard": false, 00:18:29.987 "hdgst": false, 00:18:29.987 "ddgst": false, 00:18:29.987 "dhchap_key": "key3", 00:18:29.987 "allow_unrecognized_csi": false, 00:18:29.987 "method": "bdev_nvme_attach_controller", 00:18:29.987 "req_id": 1 00:18:29.987 } 00:18:29.987 Got JSON-RPC error response 00:18:29.987 response: 00:18:29.987 { 00:18:29.987 "code": -5, 00:18:29.987 "message": "Input/output error" 00:18:29.987 } 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.987 21:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.987 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.987 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:29.987 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.245 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.246 21:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.504 request: 00:18:30.504 { 00:18:30.504 "name": "nvme0", 00:18:30.504 "trtype": "tcp", 00:18:30.504 "traddr": "10.0.0.2", 00:18:30.504 "adrfam": "ipv4", 00:18:30.504 "trsvcid": "4420", 00:18:30.504 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.504 "prchk_reftag": false, 00:18:30.504 "prchk_guard": false, 00:18:30.504 "hdgst": false, 00:18:30.504 "ddgst": false, 00:18:30.504 "dhchap_key": "key0", 00:18:30.504 "dhchap_ctrlr_key": "key1", 00:18:30.504 "allow_unrecognized_csi": false, 00:18:30.504 "method": "bdev_nvme_attach_controller", 00:18:30.504 "req_id": 1 00:18:30.504 } 00:18:30.504 Got JSON-RPC error response 00:18:30.504 response: 00:18:30.504 { 00:18:30.504 "code": -5, 00:18:30.504 "message": "Input/output error" 00:18:30.504 } 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:30.504 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:30.763 nvme0n1 00:18:30.763 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:30.763 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:30.763 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.022 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.022 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.022 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.022 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:31.022 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.022 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.281 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:31.281 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:31.281 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:31.281 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:31.847 nvme0n1 00:18:31.847 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:31.847 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:31.847 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:32.105 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.364 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.364 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:32.364 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: --dhchap-ctrl-secret DHHC-1:03:YjFjNmJlZWVmYWFiNTI5YTE3MzJiMjY1Njg5OTEyNzZlMDQ1NmQ5OTllMzA4Nzc5N2M3NjJiOTkxMTVlODdjNbaWmxs=: 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:32.930 21:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.930 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:18:33.188 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:33.446 request: 00:18:33.446 { 00:18:33.446 "name": "nvme0", 00:18:33.446 "trtype": "tcp", 00:18:33.446 "traddr": "10.0.0.2", 00:18:33.446 "adrfam": "ipv4", 00:18:33.446 "trsvcid": "4420", 00:18:33.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:33.446 "prchk_reftag": false, 00:18:33.446 "prchk_guard": false, 00:18:33.446 "hdgst": false, 00:18:33.446 "ddgst": false, 00:18:33.446 "dhchap_key": "key1", 00:18:33.446 "allow_unrecognized_csi": false, 00:18:33.446 "method": "bdev_nvme_attach_controller", 00:18:33.446 "req_id": 1 00:18:33.446 } 00:18:33.446 Got JSON-RPC error response 00:18:33.446 response: 00:18:33.446 { 00:18:33.446 "code": -5, 00:18:33.446 "message": "Input/output error" 00:18:33.446 } 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:33.446 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.381 nvme0n1 00:18:34.381 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:34.381 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:34.381 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.381 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.381 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.381 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:34.639 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:34.897 nvme0n1 00:18:34.897 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:34.897 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:34.897 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.156 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.156 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.156 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.414 
21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: '' 2s 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: ]] 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTkwNDI4NDRlN2NlMzgwYmQ5MDg0MDlmZTk5MWJlMWEcpErb: 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:35.414 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:37.316 21:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: 2s 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: ]] 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmQwOTIyMWMzZmYwN2RmYmM3NGVjZWE4YmZiOTc3ZmE0NTY5NWYxMDMwZWIyZGI44Qxqew==: 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:37.316 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.842 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.842 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.099 nvme0n1 00:18:40.357 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.357 21:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.357 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.357 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.357 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.357 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.615 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:40.615 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:40.615 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:18:40.872 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:41.130 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:41.130 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:41.130 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:41.388 21:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:41.388 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:41.955 request: 00:18:41.955 { 00:18:41.955 "name": "nvme0", 00:18:41.955 "dhchap_key": "key1", 00:18:41.955 "dhchap_ctrlr_key": "key3", 00:18:41.955 "method": "bdev_nvme_set_keys", 00:18:41.955 "req_id": 1 00:18:41.955 } 00:18:41.955 Got JSON-RPC error response 00:18:41.955 response: 00:18:41.955 { 00:18:41.955 "code": -13, 00:18:41.955 "message": "Permission denied" 00:18:41.955 } 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.955 21:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:41.955 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:42.891 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:42.891 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:42.891 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 
--reconnect-delay-sec 1 00:18:43.150 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:44.087 nvme0n1 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:44.087 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.088 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:44.088 21:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.088 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:44.088 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:44.346 request: 00:18:44.346 { 00:18:44.346 "name": "nvme0", 00:18:44.346 "dhchap_key": "key2", 00:18:44.346 "dhchap_ctrlr_key": "key0", 00:18:44.346 "method": "bdev_nvme_set_keys", 00:18:44.346 "req_id": 1 00:18:44.346 } 00:18:44.346 Got JSON-RPC error response 00:18:44.346 response: 00:18:44.346 { 00:18:44.346 "code": -13, 00:18:44.346 "message": "Permission denied" 00:18:44.346 } 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:44.346 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.604 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:44.604 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@273 -- # sleep 1s 00:18:45.539 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:45.539 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:45.539 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1289911 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1289911 ']' 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1289911 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1289911 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1289911' 00:18:45.798 killing process with pid 1289911 00:18:45.798 21:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1289911 00:18:45.798 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1289911 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.057 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.057 rmmod nvme_tcp 00:18:46.057 rmmod nvme_fabrics 00:18:46.057 rmmod nvme_keyring 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1311156 ']' 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1311156 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1311156 ']' 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1311156 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:46.316 21:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1311156 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1311156' 00:18:46.316 killing process with pid 1311156 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1311156 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1311156 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.316 21:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.316 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.849 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:48.849 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.b4b /tmp/spdk.key-sha256.OWm /tmp/spdk.key-sha384.foT /tmp/spdk.key-sha512.jdN /tmp/spdk.key-sha512.1Vp /tmp/spdk.key-sha384.zC7 /tmp/spdk.key-sha256.uXQ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:48.849 00:18:48.849 real 2m31.154s 00:18:48.849 user 5m48.240s 00:18:48.849 sys 0m24.189s 00:18:48.849 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.849 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.849 ************************************ 00:18:48.849 END TEST nvmf_auth_target 00:18:48.849 ************************************ 00:18:48.849 21:11:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:18:48.850 ************************************ 00:18:48.850 START TEST nvmf_bdevio_no_huge 00:18:48.850 ************************************ 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:48.850 * Looking for test storage... 00:18:48.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.850 21:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.850 --rc genhtml_branch_coverage=1 00:18:48.850 --rc genhtml_function_coverage=1 00:18:48.850 --rc genhtml_legend=1 00:18:48.850 --rc geninfo_all_blocks=1 00:18:48.850 --rc geninfo_unexecuted_blocks=1 00:18:48.850 00:18:48.850 ' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.850 --rc genhtml_branch_coverage=1 00:18:48.850 --rc genhtml_function_coverage=1 00:18:48.850 --rc genhtml_legend=1 00:18:48.850 --rc geninfo_all_blocks=1 00:18:48.850 --rc geninfo_unexecuted_blocks=1 00:18:48.850 00:18:48.850 ' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.850 --rc genhtml_branch_coverage=1 00:18:48.850 --rc genhtml_function_coverage=1 00:18:48.850 --rc genhtml_legend=1 00:18:48.850 --rc geninfo_all_blocks=1 00:18:48.850 --rc geninfo_unexecuted_blocks=1 00:18:48.850 00:18:48.850 ' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.850 --rc genhtml_branch_coverage=1 00:18:48.850 --rc genhtml_function_coverage=1 00:18:48.850 --rc 
genhtml_legend=1 00:18:48.850 --rc geninfo_all_blocks=1 00:18:48.850 --rc geninfo_unexecuted_blocks=1 00:18:48.850 00:18:48.850 ' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.850 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:48.851 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:55.427 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:55.428 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:55.428 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:55.428 Found net devices under 0000:86:00.0: cvl_0_0 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.428 
21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:55.428 Found net devices under 0000:86:00.1: cvl_0_1 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:55.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:18:55.428 00:18:55.428 --- 10.0.0.2 ping statistics --- 00:18:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.428 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:18:55.428 00:18:55.428 --- 10.0.0.1 ping statistics --- 00:18:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.428 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
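The `nvmf_tcp_init` trace above moves one port of the E810 NIC pair into a private network namespace so the SPDK target and the initiator talk over real hardware on the same node. Below is a dry-run sketch of that same sequence: `run` only echoes each command, because the real steps need root and the `cvl_0_0`/`cvl_0_1` interfaces present on this test machine. Treat it as an illustration of the order of operations, not a script to execute as-is.

```shell
#!/usr/bin/env bash
# Dry-run illustration of the nvmf_tcp_init steps traced above.
# run() echoes instead of executing: the real commands require root
# and the cvl_0_0 / cvl_0_1 interfaces that exist on this test node.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0      # moved into the namespace; serves 10.0.0.2
INITIATOR_IF=cvl_0_1   # stays in the root namespace; uses 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> root ns
```

The two pings at the end mirror the connectivity check in the trace (both directions answered with 0% packet loss before the target app was started).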
-m 0x78 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1318168 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1318168 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1318168 ']' 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.428 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.428 [2024-12-05 21:12:02.757927] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:18:55.428 [2024-12-05 21:12:02.757977] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:55.428 [2024-12-05 21:12:02.844113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.429 [2024-12-05 21:12:02.890203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.429 [2024-12-05 21:12:02.890235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.429 [2024-12-05 21:12:02.890243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.429 [2024-12-05 21:12:02.890249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.429 [2024-12-05 21:12:02.890254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.429 [2024-12-05 21:12:02.891331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:55.429 [2024-12-05 21:12:02.891440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:55.429 [2024-12-05 21:12:02.891552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.429 [2024-12-05 21:12:02.891553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:55.687 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.687 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:55.687 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.687 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.687 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.687 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.688 [2024-12-05 21:12:03.640085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:55.688 21:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.688 Malloc0 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.688 [2024-12-05 21:12:03.684356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.688 21:12:03 
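Once `nvmf_tgt` is running inside the namespace (started with `--no-huge -s 1024 -m 0x78` per the trace), `bdevio.sh` configures it over the `/var/tmp/spdk.sock` RPC socket. The `rpc_cmd` calls traced above correspond, one for one, to the `scripts/rpc.py` invocations sketched below; `rpc` echoes rather than executes, since a live target is needed for the real calls.

```shell
#!/usr/bin/env bash
# Dry-run of the RPC sequence from target/bdevio.sh (@18 through @22 above).
# Each call would normally go to the nvmf_tgt listening on /var/tmp/spdk.sock.
rpc() { echo "+ rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0       # expose Malloc0 as a namespace
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The resulting 64 MiB namespace is exactly what the bdevio run reports later as "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)".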
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:55.688 { 00:18:55.688 "params": { 00:18:55.688 "name": "Nvme$subsystem", 00:18:55.688 "trtype": "$TEST_TRANSPORT", 00:18:55.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.688 "adrfam": "ipv4", 00:18:55.688 "trsvcid": "$NVMF_PORT", 00:18:55.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.688 "hdgst": ${hdgst:-false}, 00:18:55.688 "ddgst": ${ddgst:-false} 00:18:55.688 }, 00:18:55.688 "method": "bdev_nvme_attach_controller" 00:18:55.688 } 00:18:55.688 EOF 00:18:55.688 )") 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:55.688 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:55.688 "params": { 00:18:55.688 "name": "Nvme1", 00:18:55.688 "trtype": "tcp", 00:18:55.688 "traddr": "10.0.0.2", 00:18:55.688 "adrfam": "ipv4", 00:18:55.688 "trsvcid": "4420", 00:18:55.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.688 "hdgst": false, 00:18:55.688 "ddgst": false 00:18:55.688 }, 00:18:55.688 "method": "bdev_nvme_attach_controller" 00:18:55.688 }' 00:18:55.688 [2024-12-05 21:12:03.737316] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:18:55.688 [2024-12-05 21:12:03.737365] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1318417 ] 00:18:55.946 [2024-12-05 21:12:03.815959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:55.946 [2024-12-05 21:12:03.864965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.946 [2024-12-05 21:12:03.865090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.946 [2024-12-05 21:12:03.865090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.204 I/O targets: 00:18:56.204 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:56.204 00:18:56.204 00:18:56.204 CUnit - A unit testing framework for C - Version 2.1-3 00:18:56.204 http://cunit.sourceforge.net/ 00:18:56.204 00:18:56.204 00:18:56.204 Suite: bdevio tests on: Nvme1n1 00:18:56.204 Test: blockdev write read block ...passed 00:18:56.204 Test: blockdev write zeroes read block ...passed 00:18:56.204 Test: blockdev write zeroes read no split ...passed 00:18:56.461 Test: blockdev write zeroes 
read split ...passed 00:18:56.461 Test: blockdev write zeroes read split partial ...passed 00:18:56.461 Test: blockdev reset ...[2024-12-05 21:12:04.359116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:56.461 [2024-12-05 21:12:04.359176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1804510 (9): Bad file descriptor 00:18:56.461 [2024-12-05 21:12:04.462907] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:56.461 passed 00:18:56.461 Test: blockdev write read 8 blocks ...passed 00:18:56.461 Test: blockdev write read size > 128k ...passed 00:18:56.461 Test: blockdev write read invalid size ...passed 00:18:56.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:56.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:56.461 Test: blockdev write read max offset ...passed 00:18:56.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:56.718 Test: blockdev writev readv 8 blocks ...passed 00:18:56.718 Test: blockdev writev readv 30 x 1block ...passed 00:18:56.718 Test: blockdev writev readv block ...passed 00:18:56.718 Test: blockdev writev readv size > 128k ...passed 00:18:56.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:56.718 Test: blockdev comparev and writev ...[2024-12-05 21:12:04.673301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.718 [2024-12-05 21:12:04.673330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.718 [2024-12-05 21:12:04.673344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 
21:12:04.673352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.673595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 21:12:04.673605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.673617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 21:12:04.673625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.673857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 21:12:04.673867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.673878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 21:12:04.673885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.674122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 21:12:04.674131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.674142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:56.719 [2024-12-05 21:12:04.674149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:56.719 passed 00:18:56.719 Test: blockdev nvme passthru rw ...passed 00:18:56.719 Test: blockdev nvme passthru vendor specific ...[2024-12-05 21:12:04.755685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.719 [2024-12-05 21:12:04.755701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.755812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.719 [2024-12-05 21:12:04.755821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.755941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.719 [2024-12-05 21:12:04.755950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:56.719 [2024-12-05 21:12:04.756063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.719 [2024-12-05 21:12:04.756078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:56.719 passed 00:18:56.719 Test: blockdev nvme admin passthru ...passed 00:18:56.719 Test: blockdev copy ...passed 00:18:56.719 00:18:56.719 Run Summary: Type Total Ran Passed Failed Inactive 00:18:56.719 suites 1 1 n/a 0 0 00:18:56.719 tests 23 23 23 0 0 00:18:56.719 asserts 152 152 152 0 n/a 00:18:56.719 00:18:56.719 Elapsed time = 1.241 seconds 
00:18:56.976 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.976 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.976 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.232 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.232 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.233 rmmod nvme_tcp 00:18:57.233 rmmod nvme_fabrics 00:18:57.233 rmmod nvme_keyring 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1318168 ']' 00:18:57.233 21:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1318168 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1318168 ']' 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1318168 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1318168 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1318168' 00:18:57.233 killing process with pid 1318168 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1318168 00:18:57.233 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1318168 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.491 21:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.491 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.020 00:19:00.020 real 0m11.019s 00:19:00.020 user 0m14.550s 00:19:00.020 sys 0m5.457s 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:00.020 ************************************ 00:19:00.020 END TEST nvmf_bdevio_no_huge 00:19:00.020 ************************************ 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.020 
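The `nvmftestfini` teardown traced above unwinds the setup in reverse. A dry-run sketch of those steps follows; as before, `run` echoes instead of executing, and the `grep -v SPDK_NVMF` pipeline is how the harness drops only its own iptables rules (they were inserted with an `SPDK_NVMF:` comment for exactly this purpose).

```shell
#!/usr/bin/env bash
# Dry-run of the nvmftestfini cleanup steps traced above.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

run modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring
run modprobe -v -r nvme-fabrics
# strip only the SPDK_NVMF-tagged rules, keep everything else intact
run "iptables-save | grep -v SPDK_NVMF | iptables-restore"
run ip netns delete "$NS"          # returns cvl_0_0 to the root namespace
run ip -4 addr flush cvl_0_1
```

Deleting the namespace implicitly moves the target-side port back to the root namespace, which is why only `cvl_0_1` needs an explicit address flush at the end.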
************************************ 00:19:00.020 START TEST nvmf_tls 00:19:00.020 ************************************ 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.020 * Looking for test storage... 00:19:00.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.020 --rc genhtml_branch_coverage=1 00:19:00.020 --rc genhtml_function_coverage=1 00:19:00.020 --rc genhtml_legend=1 00:19:00.020 --rc geninfo_all_blocks=1 00:19:00.020 --rc geninfo_unexecuted_blocks=1 00:19:00.020 00:19:00.020 ' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.020 --rc genhtml_branch_coverage=1 00:19:00.020 --rc genhtml_function_coverage=1 00:19:00.020 --rc genhtml_legend=1 00:19:00.020 --rc geninfo_all_blocks=1 00:19:00.020 --rc geninfo_unexecuted_blocks=1 00:19:00.020 00:19:00.020 ' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.020 --rc genhtml_branch_coverage=1 00:19:00.020 --rc genhtml_function_coverage=1 00:19:00.020 --rc genhtml_legend=1 00:19:00.020 --rc geninfo_all_blocks=1 00:19:00.020 --rc geninfo_unexecuted_blocks=1 00:19:00.020 00:19:00.020 ' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.020 --rc genhtml_branch_coverage=1 00:19:00.020 --rc genhtml_function_coverage=1 00:19:00.020 --rc genhtml_legend=1 00:19:00.020 --rc geninfo_all_blocks=1 00:19:00.020 --rc geninfo_unexecuted_blocks=1 00:19:00.020 00:19:00.020 ' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.020 
21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.020 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:00.021 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.590 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.590 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.590 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.590 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.590 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.591 21:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:06.591 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:06.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.591 21:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:06.591 Found net devices under 0000:86:00.0: cvl_0_0 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:06.591 Found net devices under 0000:86:00.1: cvl_0_1 00:19:06.591 21:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.591 
21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:19:06.591 00:19:06.591 --- 10.0.0.2 ping statistics --- 00:19:06.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.591 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:19:06.591 00:19:06.591 --- 10.0.0.1 ping statistics --- 00:19:06.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.591 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.591 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1322569 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1322569 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1322569 ']' 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.592 [2024-12-05 21:12:13.850838] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:06.592 [2024-12-05 21:12:13.850880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.592 [2024-12-05 21:12:13.928963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.592 [2024-12-05 21:12:13.968754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.592 [2024-12-05 21:12:13.968790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:06.592 [2024-12-05 21:12:13.968797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.592 [2024-12-05 21:12:13.968803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.592 [2024-12-05 21:12:13.968808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.592 [2024-12-05 21:12:13.969364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.592 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:06.592 true 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:06.592 
21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.592 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:06.850 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:06.850 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:06.851 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:07.109 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.109 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:07.109 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:07.109 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:07.109 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.109 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:07.366 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:07.366 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:07.366 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:07.624 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:07.624 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.624 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:07.624 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:07.624 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:07.882 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.882 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:08.141 21:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yMeXSarFbr 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.tTvXbKesto 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yMeXSarFbr 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.tTvXbKesto 00:19:08.141 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:08.400 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:08.659 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yMeXSarFbr 00:19:08.659 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yMeXSarFbr 00:19:08.659 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:08.917 [2024-12-05 21:12:16.788206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.917 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.917 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:09.176 [2024-12-05 21:12:17.149119] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.176 [2024-12-05 21:12:17.149348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.176 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:09.435 malloc0 00:19:09.435 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:09.693 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yMeXSarFbr 00:19:09.693 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.951 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yMeXSarFbr 00:19:19.927 Initializing NVMe Controllers 00:19:19.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:19.927 Initialization complete. Launching workers. 
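The format_interchange_psk / format_key trace near the top of this run (target/tls.sh@120, nvmf/common.sh@743) turns a hex key string and an HMAC id into the NVMe TLS PSK interchange string that keyring_file_add_key later consumes. Below is a minimal sketch of that transformation; it assumes the configured PSK is the ASCII key text with a little-endian CRC32 appended before base64 encoding (the function name mirrors the script, the byte order is an assumption, not taken from the log):

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, hmac_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the format_key helper traced above: append a CRC32
    checksum to the configured PSK, base64-encode, and wrap the result
    in the NVMe TLS PSK interchange framing '<prefix>:<hmac>:<b64>:'.
    Little-endian CRC placement is an assumption."""
    key = key_hex.encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, hmac_id, payload)

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
```

Consistent with this framing, base64-decoding the payload of the key shown in the log (MDAx…JEiQ) yields the 32 ASCII hex characters of the raw key followed by four checksum bytes.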
00:19:19.927 ========================================================
00:19:19.927 Latency(us)
00:19:19.927 Device Information : IOPS MiB/s Average min max
00:19:19.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16567.79 64.72 3862.98 785.32 204876.50
00:19:19.927 ========================================================
00:19:19.927 Total : 16567.79 64.72 3862.98 785.32 204876.50
00:19:19.927
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yMeXSarFbr
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yMeXSarFbr
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1324919
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1324919 /var/tmp/bdevperf.sock
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1324919 ']'
00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local
rpc_addr=/var/tmp/bdevperf.sock 00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.927 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.185 [2024-12-05 21:12:28.074489] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:20.185 [2024-12-05 21:12:28.074537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324919 ] 00:19:20.185 [2024-12-05 21:12:28.148320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.185 [2024-12-05 21:12:28.189764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.185 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.185 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.185 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yMeXSarFbr 00:19:20.444 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.703 [2024-12-05 21:12:28.616939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.703 TLSTESTn1 00:19:20.703 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.703 Running I/O for 10 seconds... 00:19:23.013 5463.00 IOPS, 21.34 MiB/s [2024-12-05T20:12:32.101Z] 5473.00 IOPS, 21.38 MiB/s [2024-12-05T20:12:33.107Z] 5485.67 IOPS, 21.43 MiB/s [2024-12-05T20:12:34.043Z] 5505.75 IOPS, 21.51 MiB/s [2024-12-05T20:12:34.977Z] 5518.20 IOPS, 21.56 MiB/s [2024-12-05T20:12:35.908Z] 5532.83 IOPS, 21.61 MiB/s [2024-12-05T20:12:36.838Z] 5543.43 IOPS, 21.65 MiB/s [2024-12-05T20:12:38.209Z] 5555.12 IOPS, 21.70 MiB/s [2024-12-05T20:12:39.142Z] 5559.11 IOPS, 21.72 MiB/s [2024-12-05T20:12:39.142Z] 5552.40 IOPS, 21.69 MiB/s 00:19:31.034 Latency(us) 00:19:31.034 [2024-12-05T20:12:39.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.034 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.034 Verification LBA range: start 0x0 length 0x2000 00:19:31.034 TLSTESTn1 : 10.01 5557.99 21.71 0.00 0.00 22996.28 5336.50 25839.91 00:19:31.034 [2024-12-05T20:12:39.142Z] =================================================================================================================== 00:19:31.034 [2024-12-05T20:12:39.142Z] Total : 5557.99 21.71 0.00 0.00 22996.28 5336.50 25839.91 00:19:31.034 { 00:19:31.034 "results": [ 00:19:31.034 { 00:19:31.034 "job": "TLSTESTn1", 00:19:31.034 "core_mask": "0x4", 00:19:31.034 "workload": "verify", 00:19:31.034 "status": "finished", 00:19:31.034 "verify_range": { 00:19:31.034 "start": 0, 00:19:31.034 "length": 8192 00:19:31.034 }, 00:19:31.034 "queue_depth": 128, 00:19:31.034 "io_size": 4096, 00:19:31.034 
"runtime": 10.012789, 00:19:31.034 "iops": 5557.991884179323, 00:19:31.034 "mibps": 21.71090579757548, 00:19:31.034 "io_failed": 0, 00:19:31.034 "io_timeout": 0, 00:19:31.034 "avg_latency_us": 22996.27543806597, 00:19:31.034 "min_latency_us": 5336.5028571428575, 00:19:31.034 "max_latency_us": 25839.908571428572 00:19:31.034 } 00:19:31.034 ], 00:19:31.034 "core_count": 1 00:19:31.034 } 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1324919 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1324919 ']' 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1324919 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324919 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324919' 00:19:31.034 killing process with pid 1324919 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1324919 00:19:31.034 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.034 00:19:31.034 Latency(us) 00:19:31.034 [2024-12-05T20:12:39.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.034 [2024-12-05T20:12:39.142Z] 
=================================================================================================================== 00:19:31.034 [2024-12-05T20:12:39.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.034 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1324919 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTvXbKesto 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTvXbKesto 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTvXbKesto 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tTvXbKesto 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1326752 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1326752 /var/tmp/bdevperf.sock 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1326752 ']' 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.034 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.034 [2024-12-05 21:12:39.085506] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:31.034 [2024-12-05 21:12:39.085555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326752 ] 00:19:31.292 [2024-12-05 21:12:39.146294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.292 [2024-12-05 21:12:39.182872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.292 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.292 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.292 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTvXbKesto 00:19:31.549 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.549 [2024-12-05 21:12:39.642168] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.549 [2024-12-05 21:12:39.646901] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:31.549 [2024-12-05 21:12:39.647546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758dc0 (107): Transport endpoint is not connected 00:19:31.549 [2024-12-05 21:12:39.648538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x758dc0 (9): Bad file descriptor 00:19:31.549 [2024-12-05 
21:12:39.649539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:31.549 [2024-12-05 21:12:39.649550] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:31.549 [2024-12-05 21:12:39.649558] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:31.549 [2024-12-05 21:12:39.649568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:31.549 request: 00:19:31.549 { 00:19:31.549 "name": "TLSTEST", 00:19:31.549 "trtype": "tcp", 00:19:31.549 "traddr": "10.0.0.2", 00:19:31.549 "adrfam": "ipv4", 00:19:31.549 "trsvcid": "4420", 00:19:31.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.549 "prchk_reftag": false, 00:19:31.549 "prchk_guard": false, 00:19:31.549 "hdgst": false, 00:19:31.549 "ddgst": false, 00:19:31.549 "psk": "key0", 00:19:31.549 "allow_unrecognized_csi": false, 00:19:31.549 "method": "bdev_nvme_attach_controller", 00:19:31.549 "req_id": 1 00:19:31.549 } 00:19:31.549 Got JSON-RPC error response 00:19:31.549 response: 00:19:31.549 { 00:19:31.549 "code": -5, 00:19:31.549 "message": "Input/output error" 00:19:31.549 } 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1326752 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1326752 ']' 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1326752 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326752 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326752' 00:19:31.807 killing process with pid 1326752 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1326752 00:19:31.807 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.807 00:19:31.807 Latency(us) 00:19:31.807 [2024-12-05T20:12:39.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.807 [2024-12-05T20:12:39.915Z] =================================================================================================================== 00:19:31.807 [2024-12-05T20:12:39.915Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1326752 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yMeXSarFbr 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
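The failed attach above reaches the harness through bdevperf's JSON-RPC socket (/var/tmp/bdevperf.sock), and the log dumps the request and error objects verbatim. The sketch below reproduces those shapes with hypothetical helper names; the field names are copied from the dump, where a TLS mismatch (wrong PSK here, wrong host NQN in the next case) surfaces as code -5, Input/output error:

```python
import json

def build_attach_request(req_id: int, subnqn: str, hostnqn: str, psk_name: str) -> dict:
    # Parameter names copied from the bdev_nvme_attach_controller
    # request dumped in the log; this helper itself is illustrative.
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": subnqn,
            "hostnqn": hostnqn,
            "psk": psk_name,
        },
    }

def check_response(resp: dict):
    # A TLS handshake failure comes back as a JSON-RPC error object,
    # e.g. {"code": -5, "message": "Input/output error"}.
    if "error" in resp:
        raise RuntimeError("attach failed: " + resp["error"]["message"])
    return resp["result"]

req = build_attach_request(1, "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "key0")
wire = json.dumps(req)  # what rpc.py would write to the UNIX socket
```

The NOT wrapper in the trace expects exactly this failure: run_bdevperf returning nonzero on the error response is what lets the negative test pass.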
00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yMeXSarFbr 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yMeXSarFbr 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yMeXSarFbr 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1326788 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1326788 
/var/tmp/bdevperf.sock 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1326788 ']' 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.807 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.808 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.808 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.065 [2024-12-05 21:12:39.933213] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:32.065 [2024-12-05 21:12:39.933263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326788 ] 00:19:32.065 [2024-12-05 21:12:40.007133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.065 [2024-12-05 21:12:40.048906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.065 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.065 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.065 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yMeXSarFbr 00:19:32.322 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:32.580 [2024-12-05 21:12:40.517939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.580 [2024-12-05 21:12:40.526379] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:32.580 [2024-12-05 21:12:40.526403] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:32.580 [2024-12-05 21:12:40.526428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:32.580 [2024-12-05 21:12:40.527339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1327dc0 (107): Transport endpoint is not connected 00:19:32.580 [2024-12-05 21:12:40.528332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1327dc0 (9): Bad file descriptor 00:19:32.580 [2024-12-05 21:12:40.529334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:32.580 [2024-12-05 21:12:40.529344] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:32.580 [2024-12-05 21:12:40.529351] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:32.580 [2024-12-05 21:12:40.529361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:32.580 request: 00:19:32.580 { 00:19:32.580 "name": "TLSTEST", 00:19:32.580 "trtype": "tcp", 00:19:32.580 "traddr": "10.0.0.2", 00:19:32.580 "adrfam": "ipv4", 00:19:32.580 "trsvcid": "4420", 00:19:32.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.580 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:32.580 "prchk_reftag": false, 00:19:32.580 "prchk_guard": false, 00:19:32.580 "hdgst": false, 00:19:32.580 "ddgst": false, 00:19:32.580 "psk": "key0", 00:19:32.580 "allow_unrecognized_csi": false, 00:19:32.580 "method": "bdev_nvme_attach_controller", 00:19:32.580 "req_id": 1 00:19:32.580 } 00:19:32.580 Got JSON-RPC error response 00:19:32.580 response: 00:19:32.580 { 00:19:32.580 "code": -5, 00:19:32.580 "message": "Input/output error" 00:19:32.580 } 00:19:32.580 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1326788 00:19:32.580 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1326788 ']' 00:19:32.580 21:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1326788 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326788 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326788' 00:19:32.581 killing process with pid 1326788 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1326788 00:19:32.581 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.581 00:19:32.581 Latency(us) 00:19:32.581 [2024-12-05T20:12:40.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.581 [2024-12-05T20:12:40.689Z] =================================================================================================================== 00:19:32.581 [2024-12-05T20:12:40.689Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.581 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1326788 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.839 21:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yMeXSarFbr 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yMeXSarFbr 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yMeXSarFbr 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yMeXSarFbr 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1327007 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1327007 /var/tmp/bdevperf.sock 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1327007 ']' 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.839 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.839 [2024-12-05 21:12:40.813375] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:32.839 [2024-12-05 21:12:40.813426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327007 ] 00:19:32.839 [2024-12-05 21:12:40.887445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.839 [2024-12-05 21:12:40.923648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.098 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.098 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:33.098 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yMeXSarFbr 00:19:33.356 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.356 [2024-12-05 21:12:41.383271] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.356 [2024-12-05 21:12:41.388613] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.356 [2024-12-05 21:12:41.388636] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.356 [2024-12-05 21:12:41.388662] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:33.357 [2024-12-05 21:12:41.388722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215ddc0 (107): Transport endpoint is not connected 00:19:33.357 [2024-12-05 21:12:41.389714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215ddc0 (9): Bad file descriptor 00:19:33.357 [2024-12-05 21:12:41.390716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:33.357 [2024-12-05 21:12:41.390727] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.357 [2024-12-05 21:12:41.390734] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:33.357 [2024-12-05 21:12:41.390745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:33.357 request: 00:19:33.357 { 00:19:33.357 "name": "TLSTEST", 00:19:33.357 "trtype": "tcp", 00:19:33.357 "traddr": "10.0.0.2", 00:19:33.357 "adrfam": "ipv4", 00:19:33.357 "trsvcid": "4420", 00:19:33.357 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.357 "prchk_reftag": false, 00:19:33.357 "prchk_guard": false, 00:19:33.357 "hdgst": false, 00:19:33.357 "ddgst": false, 00:19:33.357 "psk": "key0", 00:19:33.357 "allow_unrecognized_csi": false, 00:19:33.357 "method": "bdev_nvme_attach_controller", 00:19:33.357 "req_id": 1 00:19:33.357 } 00:19:33.357 Got JSON-RPC error response 00:19:33.357 response: 00:19:33.357 { 00:19:33.357 "code": -5, 00:19:33.357 "message": "Input/output error" 00:19:33.357 } 00:19:33.357 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1327007 00:19:33.357 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1327007 ']' 00:19:33.357 21:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1327007 00:19:33.357 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.357 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.357 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327007 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327007' 00:19:33.616 killing process with pid 1327007 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1327007 00:19:33.616 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.616 00:19:33.616 Latency(us) 00:19:33.616 [2024-12-05T20:12:41.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.616 [2024-12-05T20:12:41.724Z] =================================================================================================================== 00:19:33.616 [2024-12-05T20:12:41.724Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1327007 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.616 21:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1327228 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.616 21:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1327228 /var/tmp/bdevperf.sock 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1327228 ']' 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.616 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.616 [2024-12-05 21:12:41.675620] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:33.616 [2024-12-05 21:12:41.675671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327228 ] 00:19:33.875 [2024-12-05 21:12:41.745751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.875 [2024-12-05 21:12:41.785677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.875 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.875 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:33.875 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:34.132 [2024-12-05 21:12:42.045398] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:34.132 [2024-12-05 21:12:42.045433] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:34.132 request: 00:19:34.132 { 00:19:34.132 "name": "key0", 00:19:34.132 "path": "", 00:19:34.133 "method": "keyring_file_add_key", 00:19:34.133 "req_id": 1 00:19:34.133 } 00:19:34.133 Got JSON-RPC error response 00:19:34.133 response: 00:19:34.133 { 00:19:34.133 "code": -1, 00:19:34.133 "message": "Operation not permitted" 00:19:34.133 } 00:19:34.133 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.391 [2024-12-05 21:12:42.241999] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:34.391 [2024-12-05 21:12:42.242044] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:34.391 request: 00:19:34.391 { 00:19:34.391 "name": "TLSTEST", 00:19:34.391 "trtype": "tcp", 00:19:34.391 "traddr": "10.0.0.2", 00:19:34.391 "adrfam": "ipv4", 00:19:34.391 "trsvcid": "4420", 00:19:34.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.391 "prchk_reftag": false, 00:19:34.391 "prchk_guard": false, 00:19:34.391 "hdgst": false, 00:19:34.391 "ddgst": false, 00:19:34.391 "psk": "key0", 00:19:34.391 "allow_unrecognized_csi": false, 00:19:34.391 "method": "bdev_nvme_attach_controller", 00:19:34.391 "req_id": 1 00:19:34.391 } 00:19:34.391 Got JSON-RPC error response 00:19:34.391 response: 00:19:34.391 { 00:19:34.391 "code": -126, 00:19:34.391 "message": "Required key not available" 00:19:34.391 } 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1327228 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1327228 ']' 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1327228 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327228 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327228' 00:19:34.391 killing process with pid 1327228 
00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1327228 00:19:34.391 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.391 00:19:34.391 Latency(us) 00:19:34.391 [2024-12-05T20:12:42.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.391 [2024-12-05T20:12:42.499Z] =================================================================================================================== 00:19:34.391 [2024-12-05T20:12:42.499Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1327228 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1322569 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1322569 ']' 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1322569 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.391 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1322569 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322569' 00:19:34.651 killing process with pid 1322569 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1322569 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1322569 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.aY9AcHXBES 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.651 21:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.aY9AcHXBES 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1327308 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1327308 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1327308 ']' 00:19:34.651 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.652 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.652 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.652 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.652 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.912 [2024-12-05 21:12:42.793952] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:34.912 [2024-12-05 21:12:42.794001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.912 [2024-12-05 21:12:42.856544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.912 [2024-12-05 21:12:42.896801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.912 [2024-12-05 21:12:42.896836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.912 [2024-12-05 21:12:42.896843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.912 [2024-12-05 21:12:42.896850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.912 [2024-12-05 21:12:42.896855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.912 [2024-12-05 21:12:42.897431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.912 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.912 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.912 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.912 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.912 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.172 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.172 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.aY9AcHXBES 00:19:35.172 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aY9AcHXBES 00:19:35.172 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.172 [2024-12-05 21:12:43.206091] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.172 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.431 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.690 [2024-12-05 21:12:43.603103] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.690 [2024-12-05 21:12:43.603313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:35.690 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.948 malloc0 00:19:35.948 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.948 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:19:36.206 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aY9AcHXBES 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aY9AcHXBES 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1327733 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1327733 /var/tmp/bdevperf.sock 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1327733 ']' 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.464 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.464 [2024-12-05 21:12:44.488222] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:36.464 [2024-12-05 21:12:44.488274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327733 ] 00:19:36.464 [2024-12-05 21:12:44.563307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.721 [2024-12-05 21:12:44.603726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.721 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.721 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.721 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:19:36.979 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.979 [2024-12-05 21:12:45.047811] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.236 TLSTESTn1 00:19:37.236 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.236 Running I/O for 10 seconds... 
00:19:39.542 5444.00 IOPS, 21.27 MiB/s [2024-12-05T20:12:48.582Z] 5493.50 IOPS, 21.46 MiB/s [2024-12-05T20:12:49.514Z] 5532.67 IOPS, 21.61 MiB/s [2024-12-05T20:12:50.446Z] 5518.25 IOPS, 21.56 MiB/s [2024-12-05T20:12:51.375Z] 5520.80 IOPS, 21.57 MiB/s [2024-12-05T20:12:52.305Z] 5516.17 IOPS, 21.55 MiB/s [2024-12-05T20:12:53.676Z] 5525.71 IOPS, 21.58 MiB/s [2024-12-05T20:12:54.607Z] 5535.25 IOPS, 21.62 MiB/s [2024-12-05T20:12:55.539Z] 5545.11 IOPS, 21.66 MiB/s [2024-12-05T20:12:55.539Z] 5553.60 IOPS, 21.69 MiB/s 00:19:47.431 Latency(us) 00:19:47.431 [2024-12-05T20:12:55.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.431 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.431 Verification LBA range: start 0x0 length 0x2000 00:19:47.431 TLSTESTn1 : 10.01 5559.07 21.72 0.00 0.00 22992.06 4962.01 25590.25 00:19:47.431 [2024-12-05T20:12:55.539Z] =================================================================================================================== 00:19:47.431 [2024-12-05T20:12:55.539Z] Total : 5559.07 21.72 0.00 0.00 22992.06 4962.01 25590.25 00:19:47.431 { 00:19:47.431 "results": [ 00:19:47.432 { 00:19:47.432 "job": "TLSTESTn1", 00:19:47.432 "core_mask": "0x4", 00:19:47.432 "workload": "verify", 00:19:47.432 "status": "finished", 00:19:47.432 "verify_range": { 00:19:47.432 "start": 0, 00:19:47.432 "length": 8192 00:19:47.432 }, 00:19:47.432 "queue_depth": 128, 00:19:47.432 "io_size": 4096, 00:19:47.432 "runtime": 10.012834, 00:19:47.432 "iops": 5559.065495343277, 00:19:47.432 "mibps": 21.715099591184675, 00:19:47.432 "io_failed": 0, 00:19:47.432 "io_timeout": 0, 00:19:47.432 "avg_latency_us": 22992.06363633564, 00:19:47.432 "min_latency_us": 4962.011428571429, 00:19:47.432 "max_latency_us": 25590.24761904762 00:19:47.432 } 00:19:47.432 ], 00:19:47.432 "core_count": 1 00:19:47.432 } 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1327733 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1327733 ']' 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1327733 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327733 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327733' 00:19:47.432 killing process with pid 1327733 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1327733 00:19:47.432 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.432 00:19:47.432 Latency(us) 00:19:47.432 [2024-12-05T20:12:55.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.432 [2024-12-05T20:12:55.540Z] =================================================================================================================== 00:19:47.432 [2024-12-05T20:12:55.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1327733 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.aY9AcHXBES 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aY9AcHXBES 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aY9AcHXBES 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aY9AcHXBES 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aY9AcHXBES 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1329364 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.432 
21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1329364 /var/tmp/bdevperf.sock 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1329364 ']' 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.432 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.690 [2024-12-05 21:12:55.556257] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:19:47.690 [2024-12-05 21:12:55.556308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329364 ] 00:19:47.690 [2024-12-05 21:12:55.631856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.690 [2024-12-05 21:12:55.671720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.690 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.690 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:47.690 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:19:47.947 [2024-12-05 21:12:55.951408] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aY9AcHXBES': 0100666 00:19:47.947 [2024-12-05 21:12:55.951442] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:47.947 request: 00:19:47.947 { 00:19:47.947 "name": "key0", 00:19:47.947 "path": "/tmp/tmp.aY9AcHXBES", 00:19:47.947 "method": "keyring_file_add_key", 00:19:47.947 "req_id": 1 00:19:47.947 } 00:19:47.947 Got JSON-RPC error response 00:19:47.947 response: 00:19:47.947 { 00:19:47.947 "code": -1, 00:19:47.947 "message": "Operation not permitted" 00:19:47.947 } 00:19:47.947 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.204 [2024-12-05 21:12:56.143985] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.204 [2024-12-05 21:12:56.144019] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:48.204 request: 00:19:48.204 { 00:19:48.204 "name": "TLSTEST", 00:19:48.204 "trtype": "tcp", 00:19:48.204 "traddr": "10.0.0.2", 00:19:48.204 "adrfam": "ipv4", 00:19:48.204 "trsvcid": "4420", 00:19:48.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.204 "prchk_reftag": false, 00:19:48.204 "prchk_guard": false, 00:19:48.204 "hdgst": false, 00:19:48.204 "ddgst": false, 00:19:48.204 "psk": "key0", 00:19:48.204 "allow_unrecognized_csi": false, 00:19:48.204 "method": "bdev_nvme_attach_controller", 00:19:48.204 "req_id": 1 00:19:48.204 } 00:19:48.204 Got JSON-RPC error response 00:19:48.204 response: 00:19:48.204 { 00:19:48.204 "code": -126, 00:19:48.204 "message": "Required key not available" 00:19:48.204 } 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1329364 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1329364 ']' 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1329364 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1329364 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1329364' 00:19:48.204 killing process with pid 1329364 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1329364 00:19:48.204 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.204 00:19:48.204 Latency(us) 00:19:48.204 [2024-12-05T20:12:56.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.204 [2024-12-05T20:12:56.312Z] =================================================================================================================== 00:19:48.204 [2024-12-05T20:12:56.312Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.204 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1329364 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1327308 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1327308 ']' 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1327308 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327308 00:19:48.462 
21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327308' 00:19:48.462 killing process with pid 1327308 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1327308 00:19:48.462 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1327308 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1329604 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1329604 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1329604 ']' 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:48.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.720 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.720 [2024-12-05 21:12:56.661871] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:48.720 [2024-12-05 21:12:56.661920] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.720 [2024-12-05 21:12:56.741839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.720 [2024-12-05 21:12:56.781399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.720 [2024-12-05 21:12:56.781435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.720 [2024-12-05 21:12:56.781443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.720 [2024-12-05 21:12:56.781450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.720 [2024-12-05 21:12:56.781455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.720 [2024-12-05 21:12:56.781988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.aY9AcHXBES 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.aY9AcHXBES 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.aY9AcHXBES 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aY9AcHXBES 00:19:48.979 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.238 [2024-12-05 21:12:57.097345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.238 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.238 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.496 [2024-12-05 21:12:57.474300] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.496 [2024-12-05 21:12:57.474525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.496 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.754 malloc0 00:19:49.754 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.754 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:19:50.012 [2024-12-05 21:12:58.019844] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aY9AcHXBES': 0100666 00:19:50.012 [2024-12-05 21:12:58.019872] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:50.012 request: 00:19:50.012 { 00:19:50.012 "name": "key0", 00:19:50.012 "path": "/tmp/tmp.aY9AcHXBES", 00:19:50.012 "method": "keyring_file_add_key", 00:19:50.012 "req_id": 1 
00:19:50.012 } 00:19:50.012 Got JSON-RPC error response 00:19:50.012 response: 00:19:50.012 { 00:19:50.012 "code": -1, 00:19:50.012 "message": "Operation not permitted" 00:19:50.012 } 00:19:50.012 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.271 [2024-12-05 21:12:58.208349] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:50.271 [2024-12-05 21:12:58.208385] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:50.271 request: 00:19:50.271 { 00:19:50.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.271 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.271 "psk": "key0", 00:19:50.271 "method": "nvmf_subsystem_add_host", 00:19:50.271 "req_id": 1 00:19:50.271 } 00:19:50.271 Got JSON-RPC error response 00:19:50.271 response: 00:19:50.271 { 00:19:50.271 "code": -32603, 00:19:50.271 "message": "Internal error" 00:19:50.271 } 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1329604 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1329604 ']' 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1329604 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.271 21:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1329604 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1329604' 00:19:50.271 killing process with pid 1329604 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1329604 00:19:50.271 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1329604 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.aY9AcHXBES 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1329948 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1329948 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1329948 ']' 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.529 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.529 [2024-12-05 21:12:58.511413] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:50.529 [2024-12-05 21:12:58.511460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.529 [2024-12-05 21:12:58.587388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.529 [2024-12-05 21:12:58.627227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.529 [2024-12-05 21:12:58.627264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.529 [2024-12-05 21:12:58.627272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.529 [2024-12-05 21:12:58.627278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.530 [2024-12-05 21:12:58.627283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.530 [2024-12-05 21:12:58.627846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.aY9AcHXBES 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aY9AcHXBES 00:19:50.788 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.045 [2024-12-05 21:12:58.927667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.045 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.045 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:51.302 [2024-12-05 21:12:59.292608] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.302 [2024-12-05 21:12:59.292815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:51.302 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:51.560 malloc0 00:19:51.560 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:51.817 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:19:51.817 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1330303 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1330303 /var/tmp/bdevperf.sock 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1330303 ']' 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:52.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.074 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.075 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.075 [2024-12-05 21:13:00.100973] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:52.075 [2024-12-05 21:13:00.101024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330303 ] 00:19:52.075 [2024-12-05 21:13:00.172825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.332 [2024-12-05 21:13:00.215721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.332 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.332 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.332 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:19:52.590 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.590 [2024-12-05 21:13:00.679449] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.847 TLSTESTn1 00:19:52.847 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:53.105 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:53.105 "subsystems": [ 00:19:53.105 { 00:19:53.105 "subsystem": "keyring", 00:19:53.105 "config": [ 00:19:53.105 { 00:19:53.105 "method": "keyring_file_add_key", 00:19:53.105 "params": { 00:19:53.105 "name": "key0", 00:19:53.105 "path": "/tmp/tmp.aY9AcHXBES" 00:19:53.105 } 00:19:53.105 } 00:19:53.105 ] 00:19:53.105 }, 00:19:53.105 { 00:19:53.105 "subsystem": "iobuf", 00:19:53.105 "config": [ 00:19:53.105 { 00:19:53.105 "method": "iobuf_set_options", 00:19:53.106 "params": { 00:19:53.106 "small_pool_count": 8192, 00:19:53.106 "large_pool_count": 1024, 00:19:53.106 "small_bufsize": 8192, 00:19:53.106 "large_bufsize": 135168, 00:19:53.106 "enable_numa": false 00:19:53.106 } 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "sock", 00:19:53.106 "config": [ 00:19:53.106 { 00:19:53.106 "method": "sock_set_default_impl", 00:19:53.106 "params": { 00:19:53.106 "impl_name": "posix" 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "sock_impl_set_options", 00:19:53.106 "params": { 00:19:53.106 "impl_name": "ssl", 00:19:53.106 "recv_buf_size": 4096, 00:19:53.106 "send_buf_size": 4096, 00:19:53.106 "enable_recv_pipe": true, 00:19:53.106 "enable_quickack": false, 00:19:53.106 "enable_placement_id": 0, 00:19:53.106 "enable_zerocopy_send_server": true, 00:19:53.106 "enable_zerocopy_send_client": false, 00:19:53.106 "zerocopy_threshold": 0, 00:19:53.106 "tls_version": 0, 00:19:53.106 "enable_ktls": false 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "sock_impl_set_options", 00:19:53.106 "params": { 00:19:53.106 "impl_name": "posix", 00:19:53.106 "recv_buf_size": 2097152, 00:19:53.106 "send_buf_size": 2097152, 00:19:53.106 "enable_recv_pipe": true, 00:19:53.106 "enable_quickack": false, 00:19:53.106 "enable_placement_id": 0, 
00:19:53.106 "enable_zerocopy_send_server": true, 00:19:53.106 "enable_zerocopy_send_client": false, 00:19:53.106 "zerocopy_threshold": 0, 00:19:53.106 "tls_version": 0, 00:19:53.106 "enable_ktls": false 00:19:53.106 } 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "vmd", 00:19:53.106 "config": [] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "accel", 00:19:53.106 "config": [ 00:19:53.106 { 00:19:53.106 "method": "accel_set_options", 00:19:53.106 "params": { 00:19:53.106 "small_cache_size": 128, 00:19:53.106 "large_cache_size": 16, 00:19:53.106 "task_count": 2048, 00:19:53.106 "sequence_count": 2048, 00:19:53.106 "buf_count": 2048 00:19:53.106 } 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "bdev", 00:19:53.106 "config": [ 00:19:53.106 { 00:19:53.106 "method": "bdev_set_options", 00:19:53.106 "params": { 00:19:53.106 "bdev_io_pool_size": 65535, 00:19:53.106 "bdev_io_cache_size": 256, 00:19:53.106 "bdev_auto_examine": true, 00:19:53.106 "iobuf_small_cache_size": 128, 00:19:53.106 "iobuf_large_cache_size": 16 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "bdev_raid_set_options", 00:19:53.106 "params": { 00:19:53.106 "process_window_size_kb": 1024, 00:19:53.106 "process_max_bandwidth_mb_sec": 0 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "bdev_iscsi_set_options", 00:19:53.106 "params": { 00:19:53.106 "timeout_sec": 30 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "bdev_nvme_set_options", 00:19:53.106 "params": { 00:19:53.106 "action_on_timeout": "none", 00:19:53.106 "timeout_us": 0, 00:19:53.106 "timeout_admin_us": 0, 00:19:53.106 "keep_alive_timeout_ms": 10000, 00:19:53.106 "arbitration_burst": 0, 00:19:53.106 "low_priority_weight": 0, 00:19:53.106 "medium_priority_weight": 0, 00:19:53.106 "high_priority_weight": 0, 00:19:53.106 "nvme_adminq_poll_period_us": 10000, 00:19:53.106 "nvme_ioq_poll_period_us": 0, 
00:19:53.106 "io_queue_requests": 0, 00:19:53.106 "delay_cmd_submit": true, 00:19:53.106 "transport_retry_count": 4, 00:19:53.106 "bdev_retry_count": 3, 00:19:53.106 "transport_ack_timeout": 0, 00:19:53.106 "ctrlr_loss_timeout_sec": 0, 00:19:53.106 "reconnect_delay_sec": 0, 00:19:53.106 "fast_io_fail_timeout_sec": 0, 00:19:53.106 "disable_auto_failback": false, 00:19:53.106 "generate_uuids": false, 00:19:53.106 "transport_tos": 0, 00:19:53.106 "nvme_error_stat": false, 00:19:53.106 "rdma_srq_size": 0, 00:19:53.106 "io_path_stat": false, 00:19:53.106 "allow_accel_sequence": false, 00:19:53.106 "rdma_max_cq_size": 0, 00:19:53.106 "rdma_cm_event_timeout_ms": 0, 00:19:53.106 "dhchap_digests": [ 00:19:53.106 "sha256", 00:19:53.106 "sha384", 00:19:53.106 "sha512" 00:19:53.106 ], 00:19:53.106 "dhchap_dhgroups": [ 00:19:53.106 "null", 00:19:53.106 "ffdhe2048", 00:19:53.106 "ffdhe3072", 00:19:53.106 "ffdhe4096", 00:19:53.106 "ffdhe6144", 00:19:53.106 "ffdhe8192" 00:19:53.106 ] 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "bdev_nvme_set_hotplug", 00:19:53.106 "params": { 00:19:53.106 "period_us": 100000, 00:19:53.106 "enable": false 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "bdev_malloc_create", 00:19:53.106 "params": { 00:19:53.106 "name": "malloc0", 00:19:53.106 "num_blocks": 8192, 00:19:53.106 "block_size": 4096, 00:19:53.106 "physical_block_size": 4096, 00:19:53.106 "uuid": "ed6db019-626c-4087-ba8d-a484a22ffd6a", 00:19:53.106 "optimal_io_boundary": 0, 00:19:53.106 "md_size": 0, 00:19:53.106 "dif_type": 0, 00:19:53.106 "dif_is_head_of_md": false, 00:19:53.106 "dif_pi_format": 0 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "bdev_wait_for_examine" 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "nbd", 00:19:53.106 "config": [] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "scheduler", 00:19:53.106 "config": [ 00:19:53.106 { 00:19:53.106 "method": 
"framework_set_scheduler", 00:19:53.106 "params": { 00:19:53.106 "name": "static" 00:19:53.106 } 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "subsystem": "nvmf", 00:19:53.106 "config": [ 00:19:53.106 { 00:19:53.106 "method": "nvmf_set_config", 00:19:53.106 "params": { 00:19:53.106 "discovery_filter": "match_any", 00:19:53.106 "admin_cmd_passthru": { 00:19:53.106 "identify_ctrlr": false 00:19:53.106 }, 00:19:53.106 "dhchap_digests": [ 00:19:53.106 "sha256", 00:19:53.106 "sha384", 00:19:53.106 "sha512" 00:19:53.106 ], 00:19:53.106 "dhchap_dhgroups": [ 00:19:53.106 "null", 00:19:53.106 "ffdhe2048", 00:19:53.106 "ffdhe3072", 00:19:53.106 "ffdhe4096", 00:19:53.106 "ffdhe6144", 00:19:53.106 "ffdhe8192" 00:19:53.106 ] 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_set_max_subsystems", 00:19:53.106 "params": { 00:19:53.106 "max_subsystems": 1024 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_set_crdt", 00:19:53.106 "params": { 00:19:53.106 "crdt1": 0, 00:19:53.106 "crdt2": 0, 00:19:53.106 "crdt3": 0 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_create_transport", 00:19:53.106 "params": { 00:19:53.106 "trtype": "TCP", 00:19:53.106 "max_queue_depth": 128, 00:19:53.106 "max_io_qpairs_per_ctrlr": 127, 00:19:53.106 "in_capsule_data_size": 4096, 00:19:53.106 "max_io_size": 131072, 00:19:53.106 "io_unit_size": 131072, 00:19:53.106 "max_aq_depth": 128, 00:19:53.106 "num_shared_buffers": 511, 00:19:53.106 "buf_cache_size": 4294967295, 00:19:53.106 "dif_insert_or_strip": false, 00:19:53.106 "zcopy": false, 00:19:53.106 "c2h_success": false, 00:19:53.106 "sock_priority": 0, 00:19:53.106 "abort_timeout_sec": 1, 00:19:53.106 "ack_timeout": 0, 00:19:53.106 "data_wr_pool_size": 0 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_create_subsystem", 00:19:53.106 "params": { 00:19:53.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.106 
"allow_any_host": false, 00:19:53.106 "serial_number": "SPDK00000000000001", 00:19:53.106 "model_number": "SPDK bdev Controller", 00:19:53.106 "max_namespaces": 10, 00:19:53.106 "min_cntlid": 1, 00:19:53.106 "max_cntlid": 65519, 00:19:53.106 "ana_reporting": false 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_subsystem_add_host", 00:19:53.106 "params": { 00:19:53.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.106 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.106 "psk": "key0" 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_subsystem_add_ns", 00:19:53.106 "params": { 00:19:53.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.106 "namespace": { 00:19:53.106 "nsid": 1, 00:19:53.106 "bdev_name": "malloc0", 00:19:53.106 "nguid": "ED6DB019626C4087BA8DA484A22FFD6A", 00:19:53.106 "uuid": "ed6db019-626c-4087-ba8d-a484a22ffd6a", 00:19:53.106 "no_auto_visible": false 00:19:53.106 } 00:19:53.106 } 00:19:53.106 }, 00:19:53.106 { 00:19:53.106 "method": "nvmf_subsystem_add_listener", 00:19:53.106 "params": { 00:19:53.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.106 "listen_address": { 00:19:53.106 "trtype": "TCP", 00:19:53.106 "adrfam": "IPv4", 00:19:53.106 "traddr": "10.0.0.2", 00:19:53.106 "trsvcid": "4420" 00:19:53.106 }, 00:19:53.106 "secure_channel": true 00:19:53.106 } 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 } 00:19:53.106 ] 00:19:53.106 }' 00:19:53.106 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:53.365 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:53.365 "subsystems": [ 00:19:53.365 { 00:19:53.365 "subsystem": "keyring", 00:19:53.365 "config": [ 00:19:53.365 { 00:19:53.365 "method": "keyring_file_add_key", 00:19:53.365 "params": { 00:19:53.365 "name": "key0", 00:19:53.365 "path": "/tmp/tmp.aY9AcHXBES" 00:19:53.365 } 
00:19:53.365 } 00:19:53.365 ] 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "subsystem": "iobuf", 00:19:53.365 "config": [ 00:19:53.365 { 00:19:53.365 "method": "iobuf_set_options", 00:19:53.365 "params": { 00:19:53.365 "small_pool_count": 8192, 00:19:53.365 "large_pool_count": 1024, 00:19:53.365 "small_bufsize": 8192, 00:19:53.365 "large_bufsize": 135168, 00:19:53.365 "enable_numa": false 00:19:53.365 } 00:19:53.365 } 00:19:53.365 ] 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "subsystem": "sock", 00:19:53.365 "config": [ 00:19:53.365 { 00:19:53.365 "method": "sock_set_default_impl", 00:19:53.365 "params": { 00:19:53.365 "impl_name": "posix" 00:19:53.365 } 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "method": "sock_impl_set_options", 00:19:53.365 "params": { 00:19:53.365 "impl_name": "ssl", 00:19:53.365 "recv_buf_size": 4096, 00:19:53.365 "send_buf_size": 4096, 00:19:53.365 "enable_recv_pipe": true, 00:19:53.365 "enable_quickack": false, 00:19:53.365 "enable_placement_id": 0, 00:19:53.365 "enable_zerocopy_send_server": true, 00:19:53.365 "enable_zerocopy_send_client": false, 00:19:53.365 "zerocopy_threshold": 0, 00:19:53.365 "tls_version": 0, 00:19:53.365 "enable_ktls": false 00:19:53.365 } 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "method": "sock_impl_set_options", 00:19:53.365 "params": { 00:19:53.365 "impl_name": "posix", 00:19:53.365 "recv_buf_size": 2097152, 00:19:53.365 "send_buf_size": 2097152, 00:19:53.365 "enable_recv_pipe": true, 00:19:53.365 "enable_quickack": false, 00:19:53.365 "enable_placement_id": 0, 00:19:53.365 "enable_zerocopy_send_server": true, 00:19:53.365 "enable_zerocopy_send_client": false, 00:19:53.365 "zerocopy_threshold": 0, 00:19:53.365 "tls_version": 0, 00:19:53.365 "enable_ktls": false 00:19:53.365 } 00:19:53.365 } 00:19:53.365 ] 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "subsystem": "vmd", 00:19:53.365 "config": [] 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "subsystem": "accel", 00:19:53.365 "config": [ 00:19:53.365 { 00:19:53.365 
"method": "accel_set_options", 00:19:53.365 "params": { 00:19:53.365 "small_cache_size": 128, 00:19:53.365 "large_cache_size": 16, 00:19:53.365 "task_count": 2048, 00:19:53.365 "sequence_count": 2048, 00:19:53.365 "buf_count": 2048 00:19:53.365 } 00:19:53.365 } 00:19:53.365 ] 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "subsystem": "bdev", 00:19:53.365 "config": [ 00:19:53.365 { 00:19:53.365 "method": "bdev_set_options", 00:19:53.365 "params": { 00:19:53.365 "bdev_io_pool_size": 65535, 00:19:53.365 "bdev_io_cache_size": 256, 00:19:53.365 "bdev_auto_examine": true, 00:19:53.365 "iobuf_small_cache_size": 128, 00:19:53.365 "iobuf_large_cache_size": 16 00:19:53.365 } 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "method": "bdev_raid_set_options", 00:19:53.365 "params": { 00:19:53.365 "process_window_size_kb": 1024, 00:19:53.365 "process_max_bandwidth_mb_sec": 0 00:19:53.365 } 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "method": "bdev_iscsi_set_options", 00:19:53.365 "params": { 00:19:53.365 "timeout_sec": 30 00:19:53.365 } 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "method": "bdev_nvme_set_options", 00:19:53.365 "params": { 00:19:53.365 "action_on_timeout": "none", 00:19:53.365 "timeout_us": 0, 00:19:53.365 "timeout_admin_us": 0, 00:19:53.365 "keep_alive_timeout_ms": 10000, 00:19:53.365 "arbitration_burst": 0, 00:19:53.365 "low_priority_weight": 0, 00:19:53.365 "medium_priority_weight": 0, 00:19:53.365 "high_priority_weight": 0, 00:19:53.365 "nvme_adminq_poll_period_us": 10000, 00:19:53.365 "nvme_ioq_poll_period_us": 0, 00:19:53.365 "io_queue_requests": 512, 00:19:53.365 "delay_cmd_submit": true, 00:19:53.365 "transport_retry_count": 4, 00:19:53.365 "bdev_retry_count": 3, 00:19:53.365 "transport_ack_timeout": 0, 00:19:53.365 "ctrlr_loss_timeout_sec": 0, 00:19:53.365 "reconnect_delay_sec": 0, 00:19:53.365 "fast_io_fail_timeout_sec": 0, 00:19:53.365 "disable_auto_failback": false, 00:19:53.365 "generate_uuids": false, 00:19:53.365 "transport_tos": 0, 00:19:53.365 
"nvme_error_stat": false, 00:19:53.365 "rdma_srq_size": 0, 00:19:53.365 "io_path_stat": false, 00:19:53.365 "allow_accel_sequence": false, 00:19:53.365 "rdma_max_cq_size": 0, 00:19:53.365 "rdma_cm_event_timeout_ms": 0, 00:19:53.365 "dhchap_digests": [ 00:19:53.365 "sha256", 00:19:53.365 "sha384", 00:19:53.365 "sha512" 00:19:53.365 ], 00:19:53.365 "dhchap_dhgroups": [ 00:19:53.365 "null", 00:19:53.365 "ffdhe2048", 00:19:53.365 "ffdhe3072", 00:19:53.365 "ffdhe4096", 00:19:53.366 "ffdhe6144", 00:19:53.366 "ffdhe8192" 00:19:53.366 ] 00:19:53.366 } 00:19:53.366 }, 00:19:53.366 { 00:19:53.366 "method": "bdev_nvme_attach_controller", 00:19:53.366 "params": { 00:19:53.366 "name": "TLSTEST", 00:19:53.366 "trtype": "TCP", 00:19:53.366 "adrfam": "IPv4", 00:19:53.366 "traddr": "10.0.0.2", 00:19:53.366 "trsvcid": "4420", 00:19:53.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.366 "prchk_reftag": false, 00:19:53.366 "prchk_guard": false, 00:19:53.366 "ctrlr_loss_timeout_sec": 0, 00:19:53.366 "reconnect_delay_sec": 0, 00:19:53.366 "fast_io_fail_timeout_sec": 0, 00:19:53.366 "psk": "key0", 00:19:53.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.366 "hdgst": false, 00:19:53.366 "ddgst": false, 00:19:53.366 "multipath": "multipath" 00:19:53.366 } 00:19:53.366 }, 00:19:53.366 { 00:19:53.366 "method": "bdev_nvme_set_hotplug", 00:19:53.366 "params": { 00:19:53.366 "period_us": 100000, 00:19:53.366 "enable": false 00:19:53.366 } 00:19:53.366 }, 00:19:53.366 { 00:19:53.366 "method": "bdev_wait_for_examine" 00:19:53.366 } 00:19:53.366 ] 00:19:53.366 }, 00:19:53.366 { 00:19:53.366 "subsystem": "nbd", 00:19:53.366 "config": [] 00:19:53.366 } 00:19:53.366 ] 00:19:53.366 }' 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1330303 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1330303 ']' 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1330303 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1330303 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1330303' 00:19:53.366 killing process with pid 1330303 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1330303 00:19:53.366 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.366 00:19:53.366 Latency(us) 00:19:53.366 [2024-12-05T20:13:01.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.366 [2024-12-05T20:13:01.474Z] =================================================================================================================== 00:19:53.366 [2024-12-05T20:13:01.474Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.366 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1330303 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1329948 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1329948 ']' 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1329948 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1329948 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1329948' 00:19:53.624 killing process with pid 1329948 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1329948 00:19:53.624 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1329948 00:19:53.883 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:53.883 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.883 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.883 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:53.883 "subsystems": [ 00:19:53.883 { 00:19:53.883 "subsystem": "keyring", 00:19:53.883 "config": [ 00:19:53.883 { 00:19:53.883 "method": "keyring_file_add_key", 00:19:53.883 "params": { 00:19:53.883 "name": "key0", 00:19:53.883 "path": "/tmp/tmp.aY9AcHXBES" 00:19:53.883 } 00:19:53.883 } 00:19:53.883 ] 00:19:53.883 }, 00:19:53.883 { 00:19:53.883 "subsystem": "iobuf", 00:19:53.883 "config": [ 00:19:53.883 { 00:19:53.883 "method": "iobuf_set_options", 00:19:53.883 "params": { 00:19:53.883 "small_pool_count": 8192, 00:19:53.883 "large_pool_count": 1024, 00:19:53.883 "small_bufsize": 8192, 00:19:53.883 "large_bufsize": 135168, 00:19:53.883 "enable_numa": false 00:19:53.883 } 00:19:53.883 } 00:19:53.883 ] 00:19:53.883 }, 
00:19:53.883 { 00:19:53.883 "subsystem": "sock", 00:19:53.883 "config": [ 00:19:53.883 { 00:19:53.883 "method": "sock_set_default_impl", 00:19:53.883 "params": { 00:19:53.883 "impl_name": "posix" 00:19:53.883 } 00:19:53.883 }, 00:19:53.883 { 00:19:53.883 "method": "sock_impl_set_options", 00:19:53.883 "params": { 00:19:53.883 "impl_name": "ssl", 00:19:53.883 "recv_buf_size": 4096, 00:19:53.883 "send_buf_size": 4096, 00:19:53.883 "enable_recv_pipe": true, 00:19:53.883 "enable_quickack": false, 00:19:53.883 "enable_placement_id": 0, 00:19:53.883 "enable_zerocopy_send_server": true, 00:19:53.883 "enable_zerocopy_send_client": false, 00:19:53.883 "zerocopy_threshold": 0, 00:19:53.883 "tls_version": 0, 00:19:53.883 "enable_ktls": false 00:19:53.883 } 00:19:53.883 }, 00:19:53.883 { 00:19:53.883 "method": "sock_impl_set_options", 00:19:53.883 "params": { 00:19:53.883 "impl_name": "posix", 00:19:53.883 "recv_buf_size": 2097152, 00:19:53.883 "send_buf_size": 2097152, 00:19:53.883 "enable_recv_pipe": true, 00:19:53.883 "enable_quickack": false, 00:19:53.883 "enable_placement_id": 0, 00:19:53.883 "enable_zerocopy_send_server": true, 00:19:53.883 "enable_zerocopy_send_client": false, 00:19:53.883 "zerocopy_threshold": 0, 00:19:53.883 "tls_version": 0, 00:19:53.883 "enable_ktls": false 00:19:53.883 } 00:19:53.883 } 00:19:53.883 ] 00:19:53.883 }, 00:19:53.883 { 00:19:53.883 "subsystem": "vmd", 00:19:53.883 "config": [] 00:19:53.883 }, 00:19:53.883 { 00:19:53.883 "subsystem": "accel", 00:19:53.883 "config": [ 00:19:53.883 { 00:19:53.883 "method": "accel_set_options", 00:19:53.883 "params": { 00:19:53.883 "small_cache_size": 128, 00:19:53.883 "large_cache_size": 16, 00:19:53.883 "task_count": 2048, 00:19:53.883 "sequence_count": 2048, 00:19:53.883 "buf_count": 2048 00:19:53.883 } 00:19:53.883 } 00:19:53.883 ] 00:19:53.883 }, 00:19:53.883 { 00:19:53.883 "subsystem": "bdev", 00:19:53.884 "config": [ 00:19:53.884 { 00:19:53.884 "method": "bdev_set_options", 00:19:53.884 "params": { 
00:19:53.884 "bdev_io_pool_size": 65535, 00:19:53.884 "bdev_io_cache_size": 256, 00:19:53.884 "bdev_auto_examine": true, 00:19:53.884 "iobuf_small_cache_size": 128, 00:19:53.884 "iobuf_large_cache_size": 16 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "bdev_raid_set_options", 00:19:53.884 "params": { 00:19:53.884 "process_window_size_kb": 1024, 00:19:53.884 "process_max_bandwidth_mb_sec": 0 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "bdev_iscsi_set_options", 00:19:53.884 "params": { 00:19:53.884 "timeout_sec": 30 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "bdev_nvme_set_options", 00:19:53.884 "params": { 00:19:53.884 "action_on_timeout": "none", 00:19:53.884 "timeout_us": 0, 00:19:53.884 "timeout_admin_us": 0, 00:19:53.884 "keep_alive_timeout_ms": 10000, 00:19:53.884 "arbitration_burst": 0, 00:19:53.884 "low_priority_weight": 0, 00:19:53.884 "medium_priority_weight": 0, 00:19:53.884 "high_priority_weight": 0, 00:19:53.884 "nvme_adminq_poll_period_us": 10000, 00:19:53.884 "nvme_ioq_poll_period_us": 0, 00:19:53.884 "io_queue_requests": 0, 00:19:53.884 "delay_cmd_submit": true, 00:19:53.884 "transport_retry_count": 4, 00:19:53.884 "bdev_retry_count": 3, 00:19:53.884 "transport_ack_timeout": 0, 00:19:53.884 "ctrlr_loss_timeout_sec": 0, 00:19:53.884 "reconnect_delay_sec": 0, 00:19:53.884 "fast_io_fail_timeout_sec": 0, 00:19:53.884 "disable_auto_failback": false, 00:19:53.884 "generate_uuids": false, 00:19:53.884 "transport_tos": 0, 00:19:53.884 "nvme_error_stat": false, 00:19:53.884 "rdma_srq_size": 0, 00:19:53.884 "io_path_stat": false, 00:19:53.884 "allow_accel_sequence": false, 00:19:53.884 "rdma_max_cq_size": 0, 00:19:53.884 "rdma_cm_event_timeout_ms": 0, 00:19:53.884 "dhchap_digests": [ 00:19:53.884 "sha256", 00:19:53.884 "sha384", 00:19:53.884 "sha512" 00:19:53.884 ], 00:19:53.884 "dhchap_dhgroups": [ 00:19:53.884 "null", 00:19:53.884 "ffdhe2048", 00:19:53.884 "ffdhe3072", 00:19:53.884 
"ffdhe4096", 00:19:53.884 "ffdhe6144", 00:19:53.884 "ffdhe8192" 00:19:53.884 ] 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "bdev_nvme_set_hotplug", 00:19:53.884 "params": { 00:19:53.884 "period_us": 100000, 00:19:53.884 "enable": false 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "bdev_malloc_create", 00:19:53.884 "params": { 00:19:53.884 "name": "malloc0", 00:19:53.884 "num_blocks": 8192, 00:19:53.884 "block_size": 4096, 00:19:53.884 "physical_block_size": 4096, 00:19:53.884 "uuid": "ed6db019-626c-4087-ba8d-a484a22ffd6a", 00:19:53.884 "optimal_io_boundary": 0, 00:19:53.884 "md_size": 0, 00:19:53.884 "dif_type": 0, 00:19:53.884 "dif_is_head_of_md": false, 00:19:53.884 "dif_pi_format": 0 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "bdev_wait_for_examine" 00:19:53.884 } 00:19:53.884 ] 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "subsystem": "nbd", 00:19:53.884 "config": [] 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "subsystem": "scheduler", 00:19:53.884 "config": [ 00:19:53.884 { 00:19:53.884 "method": "framework_set_scheduler", 00:19:53.884 "params": { 00:19:53.884 "name": "static" 00:19:53.884 } 00:19:53.884 } 00:19:53.884 ] 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "subsystem": "nvmf", 00:19:53.884 "config": [ 00:19:53.884 { 00:19:53.884 "method": "nvmf_set_config", 00:19:53.884 "params": { 00:19:53.884 "discovery_filter": "match_any", 00:19:53.884 "admin_cmd_passthru": { 00:19:53.884 "identify_ctrlr": false 00:19:53.884 }, 00:19:53.884 "dhchap_digests": [ 00:19:53.884 "sha256", 00:19:53.884 "sha384", 00:19:53.884 "sha512" 00:19:53.884 ], 00:19:53.884 "dhchap_dhgroups": [ 00:19:53.884 "null", 00:19:53.884 "ffdhe2048", 00:19:53.884 "ffdhe3072", 00:19:53.884 "ffdhe4096", 00:19:53.884 "ffdhe6144", 00:19:53.884 "ffdhe8192" 00:19:53.884 ] 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "nvmf_set_max_subsystems", 00:19:53.884 "params": { 00:19:53.884 "max_subsystems": 1024 
00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "nvmf_set_crdt", 00:19:53.884 "params": { 00:19:53.884 "crdt1": 0, 00:19:53.884 "crdt2": 0, 00:19:53.884 "crdt3": 0 00:19:53.884 } 00:19:53.884 }, 00:19:53.884 { 00:19:53.884 "method": "nvmf_create_transport", 00:19:53.884 "params": { 00:19:53.884 "trtype": "TCP", 00:19:53.884 "max_queue_depth": 128, 00:19:53.884 "max_io_qpairs_per_ctrlr": 127, 00:19:53.884 "in_capsule_data_size": 4096, 00:19:53.884 "max_io_size": 131072, 00:19:53.884 "io_unit_size": 131072, 00:19:53.884 "max_aq_depth": 128, 00:19:53.884 "num_shared_buffers": 511, 00:19:53.884 "buf_cache_size": 4294967295, 00:19:53.884 "dif_insert_or_strip": false, 00:19:53.884 "zcopy": false, 00:19:53.884 "c2h_success": false, 00:19:53.884 "sock_priority": 0, 00:19:53.884 "abort_timeout_sec": 1, 00:19:53.884 "ack_timeout": 0, 00:19:53.885 "data_wr_pool_size": 0 00:19:53.885 } 00:19:53.885 }, 00:19:53.885 { 00:19:53.885 "method": "nvmf_create_subsystem", 00:19:53.885 "params": { 00:19:53.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.885 "allow_any_host": false, 00:19:53.885 "serial_number": "SPDK00000000000001", 00:19:53.885 "model_number": "SPDK bdev Controller", 00:19:53.885 "max_namespaces": 10, 00:19:53.885 "min_cntlid": 1, 00:19:53.885 "max_cntlid": 65519, 00:19:53.885 "ana_reporting": false 00:19:53.885 } 00:19:53.885 }, 00:19:53.885 { 00:19:53.885 "method": "nvmf_subsystem_add_host", 00:19:53.885 "params": { 00:19:53.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.885 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.885 "psk": "key0" 00:19:53.885 } 00:19:53.885 }, 00:19:53.885 { 00:19:53.885 "method": "nvmf_subsystem_add_ns", 00:19:53.885 "params": { 00:19:53.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.885 "namespace": { 00:19:53.885 "nsid": 1, 00:19:53.885 "bdev_name": "malloc0", 00:19:53.885 "nguid": "ED6DB019626C4087BA8DA484A22FFD6A", 00:19:53.885 "uuid": "ed6db019-626c-4087-ba8d-a484a22ffd6a", 00:19:53.885 "no_auto_visible": 
false 00:19:53.885 } 00:19:53.885 } 00:19:53.885 }, 00:19:53.885 { 00:19:53.885 "method": "nvmf_subsystem_add_listener", 00:19:53.885 "params": { 00:19:53.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.885 "listen_address": { 00:19:53.885 "trtype": "TCP", 00:19:53.885 "adrfam": "IPv4", 00:19:53.885 "traddr": "10.0.0.2", 00:19:53.885 "trsvcid": "4420" 00:19:53.885 }, 00:19:53.885 "secure_channel": true 00:19:53.885 } 00:19:53.885 } 00:19:53.885 ] 00:19:53.885 } 00:19:53.885 ] 00:19:53.885 }' 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1330589 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1330589 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1330589 ']' 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.885 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.885 [2024-12-05 21:13:01.791233] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:53.885 [2024-12-05 21:13:01.791278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.885 [2024-12-05 21:13:01.869062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.885 [2024-12-05 21:13:01.909495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.885 [2024-12-05 21:13:01.909531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.885 [2024-12-05 21:13:01.909539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.885 [2024-12-05 21:13:01.909545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.885 [2024-12-05 21:13:01.909550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.885 [2024-12-05 21:13:01.910131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.144 [2024-12-05 21:13:02.122114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.144 [2024-12-05 21:13:02.154140] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.144 [2024-12-05 21:13:02.154352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1330678 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1330678 /var/tmp/bdevperf.sock 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1330678 ']' 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.712 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:54.712 "subsystems": [ 00:19:54.712 { 00:19:54.712 "subsystem": "keyring", 00:19:54.712 "config": [ 00:19:54.712 { 00:19:54.712 "method": "keyring_file_add_key", 00:19:54.712 "params": { 00:19:54.712 "name": "key0", 00:19:54.712 "path": "/tmp/tmp.aY9AcHXBES" 00:19:54.712 } 00:19:54.712 } 00:19:54.712 ] 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "subsystem": "iobuf", 00:19:54.712 "config": [ 00:19:54.712 { 00:19:54.712 "method": "iobuf_set_options", 00:19:54.712 "params": { 00:19:54.712 "small_pool_count": 8192, 00:19:54.712 "large_pool_count": 1024, 00:19:54.712 "small_bufsize": 8192, 00:19:54.712 "large_bufsize": 135168, 00:19:54.712 "enable_numa": false 00:19:54.712 } 00:19:54.712 } 00:19:54.712 ] 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "subsystem": "sock", 00:19:54.712 "config": [ 00:19:54.712 { 00:19:54.712 "method": "sock_set_default_impl", 00:19:54.712 "params": { 00:19:54.712 "impl_name": "posix" 00:19:54.712 } 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "method": "sock_impl_set_options", 00:19:54.712 "params": { 00:19:54.712 "impl_name": "ssl", 00:19:54.712 "recv_buf_size": 4096, 00:19:54.712 "send_buf_size": 4096, 00:19:54.712 "enable_recv_pipe": true, 00:19:54.712 "enable_quickack": false, 00:19:54.712 "enable_placement_id": 0, 00:19:54.712 "enable_zerocopy_send_server": true, 00:19:54.712 "enable_zerocopy_send_client": false, 00:19:54.712 "zerocopy_threshold": 0, 00:19:54.712 "tls_version": 0, 00:19:54.712 "enable_ktls": false 00:19:54.712 } 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "method": "sock_impl_set_options", 00:19:54.712 "params": { 
00:19:54.712 "impl_name": "posix", 00:19:54.712 "recv_buf_size": 2097152, 00:19:54.712 "send_buf_size": 2097152, 00:19:54.712 "enable_recv_pipe": true, 00:19:54.712 "enable_quickack": false, 00:19:54.712 "enable_placement_id": 0, 00:19:54.712 "enable_zerocopy_send_server": true, 00:19:54.712 "enable_zerocopy_send_client": false, 00:19:54.712 "zerocopy_threshold": 0, 00:19:54.712 "tls_version": 0, 00:19:54.712 "enable_ktls": false 00:19:54.712 } 00:19:54.712 } 00:19:54.712 ] 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "subsystem": "vmd", 00:19:54.712 "config": [] 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "subsystem": "accel", 00:19:54.712 "config": [ 00:19:54.712 { 00:19:54.712 "method": "accel_set_options", 00:19:54.712 "params": { 00:19:54.712 "small_cache_size": 128, 00:19:54.712 "large_cache_size": 16, 00:19:54.712 "task_count": 2048, 00:19:54.712 "sequence_count": 2048, 00:19:54.712 "buf_count": 2048 00:19:54.712 } 00:19:54.712 } 00:19:54.712 ] 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "subsystem": "bdev", 00:19:54.712 "config": [ 00:19:54.712 { 00:19:54.712 "method": "bdev_set_options", 00:19:54.712 "params": { 00:19:54.712 "bdev_io_pool_size": 65535, 00:19:54.712 "bdev_io_cache_size": 256, 00:19:54.712 "bdev_auto_examine": true, 00:19:54.712 "iobuf_small_cache_size": 128, 00:19:54.712 "iobuf_large_cache_size": 16 00:19:54.712 } 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "method": "bdev_raid_set_options", 00:19:54.712 "params": { 00:19:54.712 "process_window_size_kb": 1024, 00:19:54.712 "process_max_bandwidth_mb_sec": 0 00:19:54.712 } 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "method": "bdev_iscsi_set_options", 00:19:54.712 "params": { 00:19:54.712 "timeout_sec": 30 00:19:54.712 } 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "method": "bdev_nvme_set_options", 00:19:54.712 "params": { 00:19:54.712 "action_on_timeout": "none", 00:19:54.712 "timeout_us": 0, 00:19:54.712 "timeout_admin_us": 0, 00:19:54.712 "keep_alive_timeout_ms": 10000, 00:19:54.712 
"arbitration_burst": 0, 00:19:54.712 "low_priority_weight": 0, 00:19:54.712 "medium_priority_weight": 0, 00:19:54.712 "high_priority_weight": 0, 00:19:54.712 "nvme_adminq_poll_period_us": 10000, 00:19:54.712 "nvme_ioq_poll_period_us": 0, 00:19:54.712 "io_queue_requests": 512, 00:19:54.712 "delay_cmd_submit": true, 00:19:54.712 "transport_retry_count": 4, 00:19:54.712 "bdev_retry_count": 3, 00:19:54.712 "transport_ack_timeout": 0, 00:19:54.712 "ctrlr_loss_timeout_sec": 0, 00:19:54.712 "reconnect_delay_sec": 0, 00:19:54.712 "fast_io_fail_timeout_sec": 0, 00:19:54.712 "disable_auto_failback": false, 00:19:54.712 "generate_uuids": false, 00:19:54.712 "transport_tos": 0, 00:19:54.712 "nvme_error_stat": false, 00:19:54.712 "rdma_srq_size": 0, 00:19:54.712 "io_path_stat": false, 00:19:54.712 "allow_accel_sequence": false, 00:19:54.712 "rdma_max_cq_size": 0, 00:19:54.712 "rdma_cm_event_timeout_ms": 0, 00:19:54.712 "dhchap_digests": [ 00:19:54.712 "sha256", 00:19:54.712 "sha384", 00:19:54.712 "sha512" 00:19:54.712 ], 00:19:54.712 "dhchap_dhgroups": [ 00:19:54.712 "null", 00:19:54.712 "ffdhe2048", 00:19:54.712 "ffdhe3072", 00:19:54.712 "ffdhe4096", 00:19:54.712 "ffdhe6144", 00:19:54.712 "ffdhe8192" 00:19:54.712 ] 00:19:54.712 } 00:19:54.712 }, 00:19:54.712 { 00:19:54.712 "method": "bdev_nvme_attach_controller", 00:19:54.712 "params": { 00:19:54.712 "name": "TLSTEST", 00:19:54.712 "trtype": "TCP", 00:19:54.712 "adrfam": "IPv4", 00:19:54.713 "traddr": "10.0.0.2", 00:19:54.713 "trsvcid": "4420", 00:19:54.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.713 "prchk_reftag": false, 00:19:54.713 "prchk_guard": false, 00:19:54.713 "ctrlr_loss_timeout_sec": 0, 00:19:54.713 "reconnect_delay_sec": 0, 00:19:54.713 "fast_io_fail_timeout_sec": 0, 00:19:54.713 "psk": "key0", 00:19:54.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.713 "hdgst": false, 00:19:54.713 "ddgst": false, 00:19:54.713 "multipath": "multipath" 00:19:54.713 } 00:19:54.713 }, 00:19:54.713 { 00:19:54.713 
"method": "bdev_nvme_set_hotplug", 00:19:54.713 "params": { 00:19:54.713 "period_us": 100000, 00:19:54.713 "enable": false 00:19:54.713 } 00:19:54.713 }, 00:19:54.713 { 00:19:54.713 "method": "bdev_wait_for_examine" 00:19:54.713 } 00:19:54.713 ] 00:19:54.713 }, 00:19:54.713 { 00:19:54.713 "subsystem": "nbd", 00:19:54.713 "config": [] 00:19:54.713 } 00:19:54.713 ] 00:19:54.713 }' 00:19:54.713 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.713 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.713 [2024-12-05 21:13:02.705077] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:19:54.713 [2024-12-05 21:13:02.705125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330678 ] 00:19:54.713 [2024-12-05 21:13:02.778933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.971 [2024-12-05 21:13:02.821192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.971 [2024-12-05 21:13:02.974786] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.536 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.536 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:55.537 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:55.537 Running I/O for 10 seconds... 
00:19:57.838 5135.00 IOPS, 20.06 MiB/s [2024-12-05T20:13:06.875Z] 5319.50 IOPS, 20.78 MiB/s [2024-12-05T20:13:07.807Z] 5361.33 IOPS, 20.94 MiB/s [2024-12-05T20:13:08.741Z] 5402.50 IOPS, 21.10 MiB/s [2024-12-05T20:13:09.675Z] 5390.00 IOPS, 21.05 MiB/s [2024-12-05T20:13:11.050Z] 5412.00 IOPS, 21.14 MiB/s [2024-12-05T20:13:11.984Z] 5405.71 IOPS, 21.12 MiB/s [2024-12-05T20:13:12.917Z] 5415.75 IOPS, 21.16 MiB/s [2024-12-05T20:13:13.851Z] 5431.44 IOPS, 21.22 MiB/s [2024-12-05T20:13:13.851Z] 5443.50 IOPS, 21.26 MiB/s 00:20:05.743 Latency(us) 00:20:05.743 [2024-12-05T20:13:13.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.743 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.743 Verification LBA range: start 0x0 length 0x2000 00:20:05.743 TLSTESTn1 : 10.01 5448.95 21.28 0.00 0.00 23457.04 5086.84 48933.55 00:20:05.743 [2024-12-05T20:13:13.851Z] =================================================================================================================== 00:20:05.743 [2024-12-05T20:13:13.851Z] Total : 5448.95 21.28 0.00 0.00 23457.04 5086.84 48933.55 00:20:05.743 { 00:20:05.743 "results": [ 00:20:05.743 { 00:20:05.743 "job": "TLSTESTn1", 00:20:05.743 "core_mask": "0x4", 00:20:05.743 "workload": "verify", 00:20:05.743 "status": "finished", 00:20:05.743 "verify_range": { 00:20:05.743 "start": 0, 00:20:05.743 "length": 8192 00:20:05.743 }, 00:20:05.743 "queue_depth": 128, 00:20:05.743 "io_size": 4096, 00:20:05.743 "runtime": 10.013126, 00:20:05.743 "iops": 5448.947711234234, 00:20:05.743 "mibps": 21.284951997008726, 00:20:05.743 "io_failed": 0, 00:20:05.743 "io_timeout": 0, 00:20:05.743 "avg_latency_us": 23457.039182182285, 00:20:05.743 "min_latency_us": 5086.8419047619045, 00:20:05.743 "max_latency_us": 48933.54666666667 00:20:05.743 } 00:20:05.743 ], 00:20:05.743 "core_count": 1 00:20:05.743 } 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1330678 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1330678 ']' 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1330678 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1330678 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1330678' 00:20:05.743 killing process with pid 1330678 00:20:05.743 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1330678 00:20:05.743 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.743 00:20:05.743 Latency(us) 00:20:05.743 [2024-12-05T20:13:13.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.744 [2024-12-05T20:13:13.852Z] =================================================================================================================== 00:20:05.744 [2024-12-05T20:13:13.852Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.744 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1330678 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1330589 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1330589 ']' 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1330589 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1330589 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1330589' 00:20:06.033 killing process with pid 1330589 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1330589 00:20:06.033 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1330589 00:20:06.033 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:06.033 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.033 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.033 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1332589 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1332589 00:20:06.344 
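The `killprocess` helper exercised above first resolves the pid's process name with `ps --no-headers -o comm=` and refuses to signal anything unexpected (the `'[' reactor_1 = sudo ']'` guard) before killing and waiting. A rough Python equivalent of that pattern; the function name and timeout are illustrative, not part of SPDK's scripts, and the wait step only works for child processes:

```python
# Sketch of the killprocess pattern: verify the process name for a pid,
# then signal it and reap it. Assumes a Linux "ps" and that pid is a child.
import os
import signal
import subprocess
import time

def kill_named_process(pid: int, expected: str, timeout: float = 5.0) -> bool:
    """SIGTERM pid only if its comm matches `expected`; True once reaped."""
    try:
        comm = subprocess.check_output(
            ["ps", "--no-headers", "-o", "comm=", str(pid)], text=True
        ).strip()
    except subprocess.CalledProcessError:
        return False                      # pid already gone
    if comm != expected:                  # same guard as the script's name check
        return False
    os.kill(pid, signal.SIGTERM)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.waitpid(pid, os.WNOHANG) != (0, 0):   # child has exited
            return True
        time.sleep(0.1)
    return False
```

The name check is what keeps a stale pid file from killing an unrelated process that happened to reuse the pid.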
21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1332589 ']' 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.344 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.344 [2024-12-05 21:13:14.195436] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:20:06.344 [2024-12-05 21:13:14.195486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.344 [2024-12-05 21:13:14.272603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.344 [2024-12-05 21:13:14.312834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.344 [2024-12-05 21:13:14.312872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.344 [2024-12-05 21:13:14.312879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.344 [2024-12-05 21:13:14.312885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:06.344 [2024-12-05 21:13:14.312890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.344 [2024-12-05 21:13:14.313460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.aY9AcHXBES 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aY9AcHXBES 00:20:06.973 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.231 [2024-12-05 21:13:15.220220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.231 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.489 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.489 [2024-12-05 21:13:15.581141] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:07.489 [2024-12-05 21:13:15.581386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.489 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:07.748 malloc0 00:20:07.748 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.006 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1332956 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1332956 /var/tmp/bdevperf.sock 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1332956 ']' 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.274 
21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.274 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.274 [2024-12-05 21:13:16.371025] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:20:08.274 [2024-12-05 21:13:16.371073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332956 ] 00:20:08.532 [2024-12-05 21:13:16.443320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.532 [2024-12-05 21:13:16.484836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.532 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.532 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.532 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:20:08.789 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:09.047 [2024-12-05 21:13:16.934110] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:09.047 nvme0n1 00:20:09.047 21:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.047 Running I/O for 1 seconds... 00:20:10.421 5744.00 IOPS, 22.44 MiB/s 00:20:10.421 Latency(us) 00:20:10.421 [2024-12-05T20:13:18.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.421 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:10.421 Verification LBA range: start 0x0 length 0x2000 00:20:10.421 nvme0n1 : 1.01 5789.77 22.62 0.00 0.00 21952.93 4868.39 23468.13 00:20:10.421 [2024-12-05T20:13:18.529Z] =================================================================================================================== 00:20:10.421 [2024-12-05T20:13:18.529Z] Total : 5789.77 22.62 0.00 0.00 21952.93 4868.39 23468.13 00:20:10.421 { 00:20:10.421 "results": [ 00:20:10.421 { 00:20:10.421 "job": "nvme0n1", 00:20:10.421 "core_mask": "0x2", 00:20:10.421 "workload": "verify", 00:20:10.421 "status": "finished", 00:20:10.421 "verify_range": { 00:20:10.421 "start": 0, 00:20:10.421 "length": 8192 00:20:10.421 }, 00:20:10.421 "queue_depth": 128, 00:20:10.421 "io_size": 4096, 00:20:10.421 "runtime": 1.014203, 00:20:10.421 "iops": 5789.767926144963, 00:20:10.421 "mibps": 22.61628096150376, 00:20:10.421 "io_failed": 0, 00:20:10.421 "io_timeout": 0, 00:20:10.421 "avg_latency_us": 21952.93371739977, 00:20:10.421 "min_latency_us": 4868.388571428572, 00:20:10.421 "max_latency_us": 23468.129523809523 00:20:10.421 } 00:20:10.421 ], 00:20:10.421 "core_count": 1 00:20:10.421 } 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1332956 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1332956 ']' 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1332956 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1332956 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1332956' 00:20:10.421 killing process with pid 1332956 00:20:10.421 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1332956 00:20:10.421 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.421 00:20:10.421 Latency(us) 00:20:10.421 [2024-12-05T20:13:18.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.421 [2024-12-05T20:13:18.529Z] =================================================================================================================== 00:20:10.421 [2024-12-05T20:13:18.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1332956 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1332589 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1332589 ']' 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1332589 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1332589 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1332589' 00:20:10.422 killing process with pid 1332589 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1332589 00:20:10.422 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1332589 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1333424 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1333424 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1333424 ']' 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.681 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.681 [2024-12-05 21:13:18.635383] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:20:10.681 [2024-12-05 21:13:18.635432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.681 [2024-12-05 21:13:18.712406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.681 [2024-12-05 21:13:18.752235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.681 [2024-12-05 21:13:18.752271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.681 [2024-12-05 21:13:18.752278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.681 [2024-12-05 21:13:18.752284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.681 [2024-12-05 21:13:18.752290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
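The `waitforlisten` step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") amounts to polling until a connect to the RPC socket succeeds. A minimal sketch of that wait loop; the retry budget and delay here are illustrative choices, not SPDK's actual values:

```python
# Poll a UNIX domain socket path until something is accepting connections,
# mirroring what waitforlisten does for /var/tmp/spdk.sock.
import socket
import time

def wait_for_unix_listener(path: str, retries: int = 100,
                           delay: float = 0.1) -> bool:
    """Return True once a connect() to the UNIX socket at `path` succeeds."""
    for _ in range(retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)       # succeeds only once the target listens
                return True
        except OSError:
            time.sleep(delay)         # not bound or not listening yet
    return False
```

Connecting (rather than just checking the path exists) matters: the socket file can exist before the application has called listen().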
00:20:10.681 [2024-12-05 21:13:18.752850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.940 [2024-12-05 21:13:18.884239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.940 malloc0 00:20:10.940 [2024-12-05 21:13:18.912264] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.940 [2024-12-05 21:13:18.912496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.940 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1333443 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1333443 /var/tmp/bdevperf.sock 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1333443 ']' 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.941 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.941 [2024-12-05 21:13:18.987162] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:20:10.941 [2024-12-05 21:13:18.987202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333443 ] 00:20:11.200 [2024-12-05 21:13:19.061817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.200 [2024-12-05 21:13:19.103327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.200 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.200 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.200 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aY9AcHXBES 00:20:11.458 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:11.458 [2024-12-05 21:13:19.556385] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.716 nvme0n1 00:20:11.716 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.716 Running I/O for 1 seconds... 
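With a fixed queue depth, the latency and IOPS columns in the results that follow are tied by Little's law (avg latency ≈ queue_depth / IOPS). A quick cross-check against this 1-second run's figures; the numbers are copied from the results JSON, and the tolerance is an assumption, since ramp-up skews such a short run:

```python
# Little's-law consistency check for the 1-second bdevperf run:
# avg_latency ~= queue_depth / IOPS when the queue stays full.

queue_depth = 128               # "-q 128" on the bdevperf command line
iops = 5322.719483206583        # "iops" field from the results JSON
avg_latency_us = 23887.180941   # "avg_latency_us" field from the same JSON

predicted_us = queue_depth / iops * 1e6
relative_error = abs(predicted_us - avg_latency_us) / avg_latency_us

print(f"predicted {predicted_us:.0f} us vs reported {avg_latency_us:.0f} us")
```

The prediction lands within about 1% of the reported average, consistent with the queue being kept full for essentially the whole run.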
00:20:12.671 5264.00 IOPS, 20.56 MiB/s 00:20:12.671 Latency(us) 00:20:12.671 [2024-12-05T20:13:20.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.671 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.671 Verification LBA range: start 0x0 length 0x2000 00:20:12.671 nvme0n1 : 1.01 5322.72 20.79 0.00 0.00 23887.18 4525.10 40694.74 00:20:12.671 [2024-12-05T20:13:20.779Z] =================================================================================================================== 00:20:12.671 [2024-12-05T20:13:20.779Z] Total : 5322.72 20.79 0.00 0.00 23887.18 4525.10 40694.74 00:20:12.671 { 00:20:12.671 "results": [ 00:20:12.671 { 00:20:12.671 "job": "nvme0n1", 00:20:12.671 "core_mask": "0x2", 00:20:12.671 "workload": "verify", 00:20:12.671 "status": "finished", 00:20:12.671 "verify_range": { 00:20:12.671 "start": 0, 00:20:12.671 "length": 8192 00:20:12.671 }, 00:20:12.671 "queue_depth": 128, 00:20:12.671 "io_size": 4096, 00:20:12.671 "runtime": 1.013016, 00:20:12.671 "iops": 5322.719483206583, 00:20:12.671 "mibps": 20.791872981275716, 00:20:12.671 "io_failed": 0, 00:20:12.671 "io_timeout": 0, 00:20:12.671 "avg_latency_us": 23887.180941076727, 00:20:12.671 "min_latency_us": 4525.104761904762, 00:20:12.671 "max_latency_us": 40694.735238095236 00:20:12.671 } 00:20:12.671 ], 00:20:12.671 "core_count": 1 00:20:12.671 } 00:20:12.929 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:12.929 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.929 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.929 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.929 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:12.929 "subsystems": [ 00:20:12.929 { 00:20:12.929 "subsystem": 
"keyring", 00:20:12.929 "config": [ 00:20:12.929 { 00:20:12.929 "method": "keyring_file_add_key", 00:20:12.929 "params": { 00:20:12.929 "name": "key0", 00:20:12.929 "path": "/tmp/tmp.aY9AcHXBES" 00:20:12.929 } 00:20:12.929 } 00:20:12.929 ] 00:20:12.929 }, 00:20:12.929 { 00:20:12.929 "subsystem": "iobuf", 00:20:12.929 "config": [ 00:20:12.929 { 00:20:12.929 "method": "iobuf_set_options", 00:20:12.929 "params": { 00:20:12.929 "small_pool_count": 8192, 00:20:12.929 "large_pool_count": 1024, 00:20:12.929 "small_bufsize": 8192, 00:20:12.929 "large_bufsize": 135168, 00:20:12.929 "enable_numa": false 00:20:12.929 } 00:20:12.929 } 00:20:12.929 ] 00:20:12.929 }, 00:20:12.929 { 00:20:12.929 "subsystem": "sock", 00:20:12.929 "config": [ 00:20:12.929 { 00:20:12.929 "method": "sock_set_default_impl", 00:20:12.929 "params": { 00:20:12.929 "impl_name": "posix" 00:20:12.929 } 00:20:12.929 }, 00:20:12.929 { 00:20:12.929 "method": "sock_impl_set_options", 00:20:12.929 "params": { 00:20:12.929 "impl_name": "ssl", 00:20:12.929 "recv_buf_size": 4096, 00:20:12.929 "send_buf_size": 4096, 00:20:12.929 "enable_recv_pipe": true, 00:20:12.929 "enable_quickack": false, 00:20:12.929 "enable_placement_id": 0, 00:20:12.929 "enable_zerocopy_send_server": true, 00:20:12.929 "enable_zerocopy_send_client": false, 00:20:12.929 "zerocopy_threshold": 0, 00:20:12.929 "tls_version": 0, 00:20:12.929 "enable_ktls": false 00:20:12.929 } 00:20:12.929 }, 00:20:12.929 { 00:20:12.929 "method": "sock_impl_set_options", 00:20:12.929 "params": { 00:20:12.929 "impl_name": "posix", 00:20:12.930 "recv_buf_size": 2097152, 00:20:12.930 "send_buf_size": 2097152, 00:20:12.930 "enable_recv_pipe": true, 00:20:12.930 "enable_quickack": false, 00:20:12.930 "enable_placement_id": 0, 00:20:12.930 "enable_zerocopy_send_server": true, 00:20:12.930 "enable_zerocopy_send_client": false, 00:20:12.930 "zerocopy_threshold": 0, 00:20:12.930 "tls_version": 0, 00:20:12.930 "enable_ktls": false 00:20:12.930 } 00:20:12.930 } 00:20:12.930 
] 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "subsystem": "vmd", 00:20:12.930 "config": [] 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "subsystem": "accel", 00:20:12.930 "config": [ 00:20:12.930 { 00:20:12.930 "method": "accel_set_options", 00:20:12.930 "params": { 00:20:12.930 "small_cache_size": 128, 00:20:12.930 "large_cache_size": 16, 00:20:12.930 "task_count": 2048, 00:20:12.930 "sequence_count": 2048, 00:20:12.930 "buf_count": 2048 00:20:12.930 } 00:20:12.930 } 00:20:12.930 ] 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "subsystem": "bdev", 00:20:12.930 "config": [ 00:20:12.930 { 00:20:12.930 "method": "bdev_set_options", 00:20:12.930 "params": { 00:20:12.930 "bdev_io_pool_size": 65535, 00:20:12.930 "bdev_io_cache_size": 256, 00:20:12.930 "bdev_auto_examine": true, 00:20:12.930 "iobuf_small_cache_size": 128, 00:20:12.930 "iobuf_large_cache_size": 16 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "bdev_raid_set_options", 00:20:12.930 "params": { 00:20:12.930 "process_window_size_kb": 1024, 00:20:12.930 "process_max_bandwidth_mb_sec": 0 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "bdev_iscsi_set_options", 00:20:12.930 "params": { 00:20:12.930 "timeout_sec": 30 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "bdev_nvme_set_options", 00:20:12.930 "params": { 00:20:12.930 "action_on_timeout": "none", 00:20:12.930 "timeout_us": 0, 00:20:12.930 "timeout_admin_us": 0, 00:20:12.930 "keep_alive_timeout_ms": 10000, 00:20:12.930 "arbitration_burst": 0, 00:20:12.930 "low_priority_weight": 0, 00:20:12.930 "medium_priority_weight": 0, 00:20:12.930 "high_priority_weight": 0, 00:20:12.930 "nvme_adminq_poll_period_us": 10000, 00:20:12.930 "nvme_ioq_poll_period_us": 0, 00:20:12.930 "io_queue_requests": 0, 00:20:12.930 "delay_cmd_submit": true, 00:20:12.930 "transport_retry_count": 4, 00:20:12.930 "bdev_retry_count": 3, 00:20:12.930 "transport_ack_timeout": 0, 00:20:12.930 "ctrlr_loss_timeout_sec": 0, 
00:20:12.930 "reconnect_delay_sec": 0, 00:20:12.930 "fast_io_fail_timeout_sec": 0, 00:20:12.930 "disable_auto_failback": false, 00:20:12.930 "generate_uuids": false, 00:20:12.930 "transport_tos": 0, 00:20:12.930 "nvme_error_stat": false, 00:20:12.930 "rdma_srq_size": 0, 00:20:12.930 "io_path_stat": false, 00:20:12.930 "allow_accel_sequence": false, 00:20:12.930 "rdma_max_cq_size": 0, 00:20:12.930 "rdma_cm_event_timeout_ms": 0, 00:20:12.930 "dhchap_digests": [ 00:20:12.930 "sha256", 00:20:12.930 "sha384", 00:20:12.930 "sha512" 00:20:12.930 ], 00:20:12.930 "dhchap_dhgroups": [ 00:20:12.930 "null", 00:20:12.930 "ffdhe2048", 00:20:12.930 "ffdhe3072", 00:20:12.930 "ffdhe4096", 00:20:12.930 "ffdhe6144", 00:20:12.930 "ffdhe8192" 00:20:12.930 ] 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "bdev_nvme_set_hotplug", 00:20:12.930 "params": { 00:20:12.930 "period_us": 100000, 00:20:12.930 "enable": false 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "bdev_malloc_create", 00:20:12.930 "params": { 00:20:12.930 "name": "malloc0", 00:20:12.930 "num_blocks": 8192, 00:20:12.930 "block_size": 4096, 00:20:12.930 "physical_block_size": 4096, 00:20:12.930 "uuid": "fb442815-35e4-42a9-9b5a-4aa46798a39e", 00:20:12.930 "optimal_io_boundary": 0, 00:20:12.930 "md_size": 0, 00:20:12.930 "dif_type": 0, 00:20:12.930 "dif_is_head_of_md": false, 00:20:12.930 "dif_pi_format": 0 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "bdev_wait_for_examine" 00:20:12.930 } 00:20:12.930 ] 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "subsystem": "nbd", 00:20:12.930 "config": [] 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "subsystem": "scheduler", 00:20:12.930 "config": [ 00:20:12.930 { 00:20:12.930 "method": "framework_set_scheduler", 00:20:12.930 "params": { 00:20:12.930 "name": "static" 00:20:12.930 } 00:20:12.930 } 00:20:12.930 ] 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "subsystem": "nvmf", 00:20:12.930 "config": [ 00:20:12.930 { 
00:20:12.930 "method": "nvmf_set_config", 00:20:12.930 "params": { 00:20:12.930 "discovery_filter": "match_any", 00:20:12.930 "admin_cmd_passthru": { 00:20:12.930 "identify_ctrlr": false 00:20:12.930 }, 00:20:12.930 "dhchap_digests": [ 00:20:12.930 "sha256", 00:20:12.930 "sha384", 00:20:12.930 "sha512" 00:20:12.930 ], 00:20:12.930 "dhchap_dhgroups": [ 00:20:12.930 "null", 00:20:12.930 "ffdhe2048", 00:20:12.930 "ffdhe3072", 00:20:12.930 "ffdhe4096", 00:20:12.930 "ffdhe6144", 00:20:12.930 "ffdhe8192" 00:20:12.930 ] 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "nvmf_set_max_subsystems", 00:20:12.930 "params": { 00:20:12.930 "max_subsystems": 1024 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "nvmf_set_crdt", 00:20:12.930 "params": { 00:20:12.930 "crdt1": 0, 00:20:12.930 "crdt2": 0, 00:20:12.930 "crdt3": 0 00:20:12.930 } 00:20:12.930 }, 00:20:12.930 { 00:20:12.930 "method": "nvmf_create_transport", 00:20:12.930 "params": { 00:20:12.930 "trtype": "TCP", 00:20:12.930 "max_queue_depth": 128, 00:20:12.930 "max_io_qpairs_per_ctrlr": 127, 00:20:12.930 "in_capsule_data_size": 4096, 00:20:12.930 "max_io_size": 131072, 00:20:12.930 "io_unit_size": 131072, 00:20:12.930 "max_aq_depth": 128, 00:20:12.930 "num_shared_buffers": 511, 00:20:12.930 "buf_cache_size": 4294967295, 00:20:12.930 "dif_insert_or_strip": false, 00:20:12.930 "zcopy": false, 00:20:12.930 "c2h_success": false, 00:20:12.930 "sock_priority": 0, 00:20:12.930 "abort_timeout_sec": 1, 00:20:12.931 "ack_timeout": 0, 00:20:12.931 "data_wr_pool_size": 0 00:20:12.931 } 00:20:12.931 }, 00:20:12.931 { 00:20:12.931 "method": "nvmf_create_subsystem", 00:20:12.931 "params": { 00:20:12.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.931 "allow_any_host": false, 00:20:12.931 "serial_number": "00000000000000000000", 00:20:12.931 "model_number": "SPDK bdev Controller", 00:20:12.931 "max_namespaces": 32, 00:20:12.931 "min_cntlid": 1, 00:20:12.931 "max_cntlid": 65519, 00:20:12.931 
"ana_reporting": false 00:20:12.931 } 00:20:12.931 }, 00:20:12.931 { 00:20:12.931 "method": "nvmf_subsystem_add_host", 00:20:12.931 "params": { 00:20:12.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.931 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.931 "psk": "key0" 00:20:12.931 } 00:20:12.931 }, 00:20:12.931 { 00:20:12.931 "method": "nvmf_subsystem_add_ns", 00:20:12.931 "params": { 00:20:12.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.931 "namespace": { 00:20:12.931 "nsid": 1, 00:20:12.931 "bdev_name": "malloc0", 00:20:12.931 "nguid": "FB44281535E442A99B5A4AA46798A39E", 00:20:12.931 "uuid": "fb442815-35e4-42a9-9b5a-4aa46798a39e", 00:20:12.931 "no_auto_visible": false 00:20:12.931 } 00:20:12.931 } 00:20:12.931 }, 00:20:12.931 { 00:20:12.931 "method": "nvmf_subsystem_add_listener", 00:20:12.931 "params": { 00:20:12.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.931 "listen_address": { 00:20:12.931 "trtype": "TCP", 00:20:12.931 "adrfam": "IPv4", 00:20:12.931 "traddr": "10.0.0.2", 00:20:12.931 "trsvcid": "4420" 00:20:12.931 }, 00:20:12.931 "secure_channel": false, 00:20:12.931 "sock_impl": "ssl" 00:20:12.931 } 00:20:12.931 } 00:20:12.931 ] 00:20:12.931 } 00:20:12.931 ] 00:20:12.931 }' 00:20:12.931 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:13.190 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:13.190 "subsystems": [ 00:20:13.190 { 00:20:13.190 "subsystem": "keyring", 00:20:13.190 "config": [ 00:20:13.190 { 00:20:13.190 "method": "keyring_file_add_key", 00:20:13.190 "params": { 00:20:13.190 "name": "key0", 00:20:13.190 "path": "/tmp/tmp.aY9AcHXBES" 00:20:13.190 } 00:20:13.190 } 00:20:13.190 ] 00:20:13.190 }, 00:20:13.190 { 00:20:13.190 "subsystem": "iobuf", 00:20:13.190 "config": [ 00:20:13.190 { 00:20:13.190 "method": "iobuf_set_options", 00:20:13.190 "params": { 00:20:13.190 
"small_pool_count": 8192, 00:20:13.190 "large_pool_count": 1024, 00:20:13.190 "small_bufsize": 8192, 00:20:13.190 "large_bufsize": 135168, 00:20:13.190 "enable_numa": false 00:20:13.190 } 00:20:13.190 } 00:20:13.190 ] 00:20:13.190 }, 00:20:13.190 { 00:20:13.190 "subsystem": "sock", 00:20:13.190 "config": [ 00:20:13.190 { 00:20:13.190 "method": "sock_set_default_impl", 00:20:13.190 "params": { 00:20:13.190 "impl_name": "posix" 00:20:13.190 } 00:20:13.190 }, 00:20:13.190 { 00:20:13.190 "method": "sock_impl_set_options", 00:20:13.190 "params": { 00:20:13.190 "impl_name": "ssl", 00:20:13.190 "recv_buf_size": 4096, 00:20:13.190 "send_buf_size": 4096, 00:20:13.190 "enable_recv_pipe": true, 00:20:13.190 "enable_quickack": false, 00:20:13.190 "enable_placement_id": 0, 00:20:13.190 "enable_zerocopy_send_server": true, 00:20:13.190 "enable_zerocopy_send_client": false, 00:20:13.190 "zerocopy_threshold": 0, 00:20:13.190 "tls_version": 0, 00:20:13.190 "enable_ktls": false 00:20:13.190 } 00:20:13.190 }, 00:20:13.190 { 00:20:13.190 "method": "sock_impl_set_options", 00:20:13.190 "params": { 00:20:13.190 "impl_name": "posix", 00:20:13.190 "recv_buf_size": 2097152, 00:20:13.190 "send_buf_size": 2097152, 00:20:13.190 "enable_recv_pipe": true, 00:20:13.190 "enable_quickack": false, 00:20:13.190 "enable_placement_id": 0, 00:20:13.190 "enable_zerocopy_send_server": true, 00:20:13.191 "enable_zerocopy_send_client": false, 00:20:13.191 "zerocopy_threshold": 0, 00:20:13.191 "tls_version": 0, 00:20:13.191 "enable_ktls": false 00:20:13.191 } 00:20:13.191 } 00:20:13.191 ] 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "subsystem": "vmd", 00:20:13.191 "config": [] 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "subsystem": "accel", 00:20:13.191 "config": [ 00:20:13.191 { 00:20:13.191 "method": "accel_set_options", 00:20:13.191 "params": { 00:20:13.191 "small_cache_size": 128, 00:20:13.191 "large_cache_size": 16, 00:20:13.191 "task_count": 2048, 00:20:13.191 "sequence_count": 2048, 00:20:13.191 
"buf_count": 2048 00:20:13.191 } 00:20:13.191 } 00:20:13.191 ] 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "subsystem": "bdev", 00:20:13.191 "config": [ 00:20:13.191 { 00:20:13.191 "method": "bdev_set_options", 00:20:13.191 "params": { 00:20:13.191 "bdev_io_pool_size": 65535, 00:20:13.191 "bdev_io_cache_size": 256, 00:20:13.191 "bdev_auto_examine": true, 00:20:13.191 "iobuf_small_cache_size": 128, 00:20:13.191 "iobuf_large_cache_size": 16 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_raid_set_options", 00:20:13.191 "params": { 00:20:13.191 "process_window_size_kb": 1024, 00:20:13.191 "process_max_bandwidth_mb_sec": 0 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_iscsi_set_options", 00:20:13.191 "params": { 00:20:13.191 "timeout_sec": 30 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_nvme_set_options", 00:20:13.191 "params": { 00:20:13.191 "action_on_timeout": "none", 00:20:13.191 "timeout_us": 0, 00:20:13.191 "timeout_admin_us": 0, 00:20:13.191 "keep_alive_timeout_ms": 10000, 00:20:13.191 "arbitration_burst": 0, 00:20:13.191 "low_priority_weight": 0, 00:20:13.191 "medium_priority_weight": 0, 00:20:13.191 "high_priority_weight": 0, 00:20:13.191 "nvme_adminq_poll_period_us": 10000, 00:20:13.191 "nvme_ioq_poll_period_us": 0, 00:20:13.191 "io_queue_requests": 512, 00:20:13.191 "delay_cmd_submit": true, 00:20:13.191 "transport_retry_count": 4, 00:20:13.191 "bdev_retry_count": 3, 00:20:13.191 "transport_ack_timeout": 0, 00:20:13.191 "ctrlr_loss_timeout_sec": 0, 00:20:13.191 "reconnect_delay_sec": 0, 00:20:13.191 "fast_io_fail_timeout_sec": 0, 00:20:13.191 "disable_auto_failback": false, 00:20:13.191 "generate_uuids": false, 00:20:13.191 "transport_tos": 0, 00:20:13.191 "nvme_error_stat": false, 00:20:13.191 "rdma_srq_size": 0, 00:20:13.191 "io_path_stat": false, 00:20:13.191 "allow_accel_sequence": false, 00:20:13.191 "rdma_max_cq_size": 0, 00:20:13.191 "rdma_cm_event_timeout_ms": 0, 
00:20:13.191 "dhchap_digests": [ 00:20:13.191 "sha256", 00:20:13.191 "sha384", 00:20:13.191 "sha512" 00:20:13.191 ], 00:20:13.191 "dhchap_dhgroups": [ 00:20:13.191 "null", 00:20:13.191 "ffdhe2048", 00:20:13.191 "ffdhe3072", 00:20:13.191 "ffdhe4096", 00:20:13.191 "ffdhe6144", 00:20:13.191 "ffdhe8192" 00:20:13.191 ] 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_nvme_attach_controller", 00:20:13.191 "params": { 00:20:13.191 "name": "nvme0", 00:20:13.191 "trtype": "TCP", 00:20:13.191 "adrfam": "IPv4", 00:20:13.191 "traddr": "10.0.0.2", 00:20:13.191 "trsvcid": "4420", 00:20:13.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.191 "prchk_reftag": false, 00:20:13.191 "prchk_guard": false, 00:20:13.191 "ctrlr_loss_timeout_sec": 0, 00:20:13.191 "reconnect_delay_sec": 0, 00:20:13.191 "fast_io_fail_timeout_sec": 0, 00:20:13.191 "psk": "key0", 00:20:13.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.191 "hdgst": false, 00:20:13.191 "ddgst": false, 00:20:13.191 "multipath": "multipath" 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_nvme_set_hotplug", 00:20:13.191 "params": { 00:20:13.191 "period_us": 100000, 00:20:13.191 "enable": false 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_enable_histogram", 00:20:13.191 "params": { 00:20:13.191 "name": "nvme0n1", 00:20:13.191 "enable": true 00:20:13.191 } 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "method": "bdev_wait_for_examine" 00:20:13.191 } 00:20:13.191 ] 00:20:13.191 }, 00:20:13.191 { 00:20:13.191 "subsystem": "nbd", 00:20:13.191 "config": [] 00:20:13.191 } 00:20:13.191 ] 00:20:13.191 }' 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1333443 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1333443 ']' 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1333443 00:20:13.191 21:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333443 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333443' 00:20:13.191 killing process with pid 1333443 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1333443 00:20:13.191 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.191 00:20:13.191 Latency(us) 00:20:13.191 [2024-12-05T20:13:21.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.191 [2024-12-05T20:13:21.299Z] =================================================================================================================== 00:20:13.191 [2024-12-05T20:13:21.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.191 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1333443 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1333424 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1333424 ']' 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1333424 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.450 
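The `killprocess` traces above follow a common harness pattern: probe the pid with `kill -0`, read its command name via `ps -o comm=`, then kill and confirm it is gone. A minimal standalone sketch of that pattern (using a `sleep` stand-in for the target process; the pid here is whatever the shell assigns, not a pid from this log):

```shell
#!/bin/sh
# Stand-in for the long-running target/bdevperf process.
sleep 60 &
pid=$!

# kill -0 sends no signal; it exits 0 only if the process exists
# and we may signal it -- the same liveness probe killprocess uses.
kill -0 "$pid"

# Recover the process name the way the trace does (GNU ps; comm= also
# suppresses the header column).
name=$(ps --no-headers -o comm= "$pid")
echo "killing process $name with pid $pid"

kill "$pid"
wait "$pid" 2>/dev/null || true

# After the kill, the same probe must now fail.
if kill -0 "$pid" 2>/dev/null; then
    echo "process still alive" >&2
    exit 1
fi
echo "process gone"
```

This mirrors the sequence visible in the trace (`kill -0`, `uname`, `ps --no-headers -o comm=`, then `kill`); it is a sketch of the shape, not the harness's actual `killprocess` function.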
21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333424 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333424' 00:20:13.450 killing process with pid 1333424 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1333424 00:20:13.450 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1333424 00:20:13.709 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:13.709 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.709 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.709 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:13.709 "subsystems": [ 00:20:13.709 { 00:20:13.709 "subsystem": "keyring", 00:20:13.709 "config": [ 00:20:13.709 { 00:20:13.709 "method": "keyring_file_add_key", 00:20:13.709 "params": { 00:20:13.709 "name": "key0", 00:20:13.709 "path": "/tmp/tmp.aY9AcHXBES" 00:20:13.709 } 00:20:13.709 } 00:20:13.709 ] 00:20:13.709 }, 00:20:13.709 { 00:20:13.709 "subsystem": "iobuf", 00:20:13.709 "config": [ 00:20:13.709 { 00:20:13.709 "method": "iobuf_set_options", 00:20:13.709 "params": { 00:20:13.709 "small_pool_count": 8192, 00:20:13.709 "large_pool_count": 1024, 00:20:13.709 "small_bufsize": 8192, 00:20:13.709 "large_bufsize": 135168, 00:20:13.709 "enable_numa": false 00:20:13.709 } 00:20:13.709 } 00:20:13.709 ] 00:20:13.709 }, 00:20:13.709 { 00:20:13.709 "subsystem": "sock", 00:20:13.709 "config": [ 
00:20:13.709 { 00:20:13.709 "method": "sock_set_default_impl", 00:20:13.709 "params": { 00:20:13.709 "impl_name": "posix" 00:20:13.709 } 00:20:13.709 }, 00:20:13.709 { 00:20:13.709 "method": "sock_impl_set_options", 00:20:13.709 "params": { 00:20:13.709 "impl_name": "ssl", 00:20:13.709 "recv_buf_size": 4096, 00:20:13.709 "send_buf_size": 4096, 00:20:13.709 "enable_recv_pipe": true, 00:20:13.709 "enable_quickack": false, 00:20:13.709 "enable_placement_id": 0, 00:20:13.709 "enable_zerocopy_send_server": true, 00:20:13.709 "enable_zerocopy_send_client": false, 00:20:13.709 "zerocopy_threshold": 0, 00:20:13.709 "tls_version": 0, 00:20:13.709 "enable_ktls": false 00:20:13.709 } 00:20:13.709 }, 00:20:13.709 { 00:20:13.709 "method": "sock_impl_set_options", 00:20:13.709 "params": { 00:20:13.709 "impl_name": "posix", 00:20:13.709 "recv_buf_size": 2097152, 00:20:13.709 "send_buf_size": 2097152, 00:20:13.709 "enable_recv_pipe": true, 00:20:13.709 "enable_quickack": false, 00:20:13.709 "enable_placement_id": 0, 00:20:13.709 "enable_zerocopy_send_server": true, 00:20:13.709 "enable_zerocopy_send_client": false, 00:20:13.709 "zerocopy_threshold": 0, 00:20:13.709 "tls_version": 0, 00:20:13.709 "enable_ktls": false 00:20:13.709 } 00:20:13.709 } 00:20:13.709 ] 00:20:13.709 }, 00:20:13.709 { 00:20:13.709 "subsystem": "vmd", 00:20:13.710 "config": [] 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "subsystem": "accel", 00:20:13.710 "config": [ 00:20:13.710 { 00:20:13.710 "method": "accel_set_options", 00:20:13.710 "params": { 00:20:13.710 "small_cache_size": 128, 00:20:13.710 "large_cache_size": 16, 00:20:13.710 "task_count": 2048, 00:20:13.710 "sequence_count": 2048, 00:20:13.710 "buf_count": 2048 00:20:13.710 } 00:20:13.710 } 00:20:13.710 ] 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "subsystem": "bdev", 00:20:13.710 "config": [ 00:20:13.710 { 00:20:13.710 "method": "bdev_set_options", 00:20:13.710 "params": { 00:20:13.710 "bdev_io_pool_size": 65535, 00:20:13.710 "bdev_io_cache_size": 
256, 00:20:13.710 "bdev_auto_examine": true, 00:20:13.710 "iobuf_small_cache_size": 128, 00:20:13.710 "iobuf_large_cache_size": 16 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "bdev_raid_set_options", 00:20:13.710 "params": { 00:20:13.710 "process_window_size_kb": 1024, 00:20:13.710 "process_max_bandwidth_mb_sec": 0 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "bdev_iscsi_set_options", 00:20:13.710 "params": { 00:20:13.710 "timeout_sec": 30 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "bdev_nvme_set_options", 00:20:13.710 "params": { 00:20:13.710 "action_on_timeout": "none", 00:20:13.710 "timeout_us": 0, 00:20:13.710 "timeout_admin_us": 0, 00:20:13.710 "keep_alive_timeout_ms": 10000, 00:20:13.710 "arbitration_burst": 0, 00:20:13.710 "low_priority_weight": 0, 00:20:13.710 "medium_priority_weight": 0, 00:20:13.710 "high_priority_weight": 0, 00:20:13.710 "nvme_adminq_poll_period_us": 10000, 00:20:13.710 "nvme_ioq_poll_period_us": 0, 00:20:13.710 "io_queue_requests": 0, 00:20:13.710 "delay_cmd_submit": true, 00:20:13.710 "transport_retry_count": 4, 00:20:13.710 "bdev_retry_count": 3, 00:20:13.710 "transport_ack_timeout": 0, 00:20:13.710 "ctrlr_loss_timeout_sec": 0, 00:20:13.710 "reconnect_delay_sec": 0, 00:20:13.710 "fast_io_fail_timeout_sec": 0, 00:20:13.710 "disable_auto_failback": false, 00:20:13.710 "generate_uuids": false, 00:20:13.710 "transport_tos": 0, 00:20:13.710 "nvme_error_stat": false, 00:20:13.710 "rdma_srq_size": 0, 00:20:13.710 "io_path_stat": false, 00:20:13.710 "allow_accel_sequence": false, 00:20:13.710 "rdma_max_cq_size": 0, 00:20:13.710 "rdma_cm_event_timeout_ms": 0, 00:20:13.710 "dhchap_digests": [ 00:20:13.710 "sha256", 00:20:13.710 "sha384", 00:20:13.710 "sha512" 00:20:13.710 ], 00:20:13.710 "dhchap_dhgroups": [ 00:20:13.710 "null", 00:20:13.710 "ffdhe2048", 00:20:13.710 "ffdhe3072", 00:20:13.710 "ffdhe4096", 00:20:13.710 "ffdhe6144", 00:20:13.710 "ffdhe8192" 00:20:13.710 ] 
00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "bdev_nvme_set_hotplug", 00:20:13.710 "params": { 00:20:13.710 "period_us": 100000, 00:20:13.710 "enable": false 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "bdev_malloc_create", 00:20:13.710 "params": { 00:20:13.710 "name": "malloc0", 00:20:13.710 "num_blocks": 8192, 00:20:13.710 "block_size": 4096, 00:20:13.710 "physical_block_size": 4096, 00:20:13.710 "uuid": "fb442815-35e4-42a9-9b5a-4aa46798a39e", 00:20:13.710 "optimal_io_boundary": 0, 00:20:13.710 "md_size": 0, 00:20:13.710 "dif_type": 0, 00:20:13.710 "dif_is_head_of_md": false, 00:20:13.710 "dif_pi_format": 0 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "bdev_wait_for_examine" 00:20:13.710 } 00:20:13.710 ] 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "subsystem": "nbd", 00:20:13.710 "config": [] 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "subsystem": "scheduler", 00:20:13.710 "config": [ 00:20:13.710 { 00:20:13.710 "method": "framework_set_scheduler", 00:20:13.710 "params": { 00:20:13.710 "name": "static" 00:20:13.710 } 00:20:13.710 } 00:20:13.710 ] 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "subsystem": "nvmf", 00:20:13.710 "config": [ 00:20:13.710 { 00:20:13.710 "method": "nvmf_set_config", 00:20:13.710 "params": { 00:20:13.710 "discovery_filter": "match_any", 00:20:13.710 "admin_cmd_passthru": { 00:20:13.710 "identify_ctrlr": false 00:20:13.710 }, 00:20:13.710 "dhchap_digests": [ 00:20:13.710 "sha256", 00:20:13.710 "sha384", 00:20:13.710 "sha512" 00:20:13.710 ], 00:20:13.710 "dhchap_dhgroups": [ 00:20:13.710 "null", 00:20:13.710 "ffdhe2048", 00:20:13.710 "ffdhe3072", 00:20:13.710 "ffdhe4096", 00:20:13.710 "ffdhe6144", 00:20:13.710 "ffdhe8192" 00:20:13.710 ] 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "nvmf_set_max_subsystems", 00:20:13.710 "params": { 00:20:13.710 "max_subsystems": 1024 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": 
"nvmf_set_crdt", 00:20:13.710 "params": { 00:20:13.710 "crdt1": 0, 00:20:13.710 "crdt2": 0, 00:20:13.710 "crdt3": 0 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "nvmf_create_transport", 00:20:13.710 "params": { 00:20:13.710 "trtype": "TCP", 00:20:13.710 "max_queue_depth": 128, 00:20:13.710 "max_io_qpairs_per_ctrlr": 127, 00:20:13.710 "in_capsule_data_size": 4096, 00:20:13.710 "max_io_size": 131072, 00:20:13.710 "io_unit_size": 131072, 00:20:13.710 "max_aq_depth": 128, 00:20:13.710 "num_shared_buffers": 511, 00:20:13.710 "buf_cache_size": 4294967295, 00:20:13.710 "dif_insert_or_strip": false, 00:20:13.710 "zcopy": false, 00:20:13.710 "c2h_success": false, 00:20:13.710 "sock_priority": 0, 00:20:13.710 "abort_timeout_sec": 1, 00:20:13.710 "ack_timeout": 0, 00:20:13.710 "data_wr_pool_size": 0 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "nvmf_create_subsystem", 00:20:13.710 "params": { 00:20:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.710 "allow_any_host": false, 00:20:13.710 "serial_number": "00000000000000000000", 00:20:13.710 "model_number": "SPDK bdev Controller", 00:20:13.710 "max_namespaces": 32, 00:20:13.710 "min_cntlid": 1, 00:20:13.710 "max_cntlid": 65519, 00:20:13.710 "ana_reporting": false 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "nvmf_subsystem_add_host", 00:20:13.710 "params": { 00:20:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.710 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.710 "psk": "key0" 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 00:20:13.710 "method": "nvmf_subsystem_add_ns", 00:20:13.710 "params": { 00:20:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.710 "namespace": { 00:20:13.710 "nsid": 1, 00:20:13.710 "bdev_name": "malloc0", 00:20:13.710 "nguid": "FB44281535E442A99B5A4AA46798A39E", 00:20:13.710 "uuid": "fb442815-35e4-42a9-9b5a-4aa46798a39e", 00:20:13.710 "no_auto_visible": false 00:20:13.710 } 00:20:13.710 } 00:20:13.710 }, 00:20:13.710 { 
00:20:13.710 "method": "nvmf_subsystem_add_listener", 00:20:13.710 "params": { 00:20:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.710 "listen_address": { 00:20:13.710 "trtype": "TCP", 00:20:13.710 "adrfam": "IPv4", 00:20:13.710 "traddr": "10.0.0.2", 00:20:13.710 "trsvcid": "4420" 00:20:13.710 }, 00:20:13.710 "secure_channel": false, 00:20:13.710 "sock_impl": "ssl" 00:20:13.710 } 00:20:13.710 } 00:20:13.710 ] 00:20:13.710 } 00:20:13.710 ] 00:20:13.710 }' 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1333917 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1333917 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1333917 ']' 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.710 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.710 [2024-12-05 21:13:21.643156] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:20:13.710 [2024-12-05 21:13:21.643200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.710 [2024-12-05 21:13:21.720115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.710 [2024-12-05 21:13:21.760350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.710 [2024-12-05 21:13:21.760390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.711 [2024-12-05 21:13:21.760398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.711 [2024-12-05 21:13:21.760403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.711 [2024-12-05 21:13:21.760409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
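All of the JSON blobs echoed in this run share the `save_config` layout: a top-level `subsystems` array whose entries hold a `subsystem` name and a `config` list of `{method, params}` RPC calls. A minimal sketch of walking that structure to pull out the nvmf listeners (the fragment below is a trimmed, illustrative subset of the dump above, and the `listeners` helper is ours, not an SPDK API):

```python
import json

# Trimmed save_config-style fragment: one nvmf subsystem with the
# TLS listener call seen in the log above.
dump = json.loads("""
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_subsystem_add_listener",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {
              "trtype": "TCP",
              "adrfam": "IPv4",
              "traddr": "10.0.0.2",
              "trsvcid": "4420"
            },
            "secure_channel": false,
            "sock_impl": "ssl"
          }
        }
      ]
    }
  ]
}
""")

def listeners(cfg):
    """Yield (nqn, traddr, trsvcid, sock_impl) for each listener call."""
    for sub in cfg["subsystems"]:
        if sub["subsystem"] != "nvmf":
            continue
        for call in sub["config"]:
            if call["method"] == "nvmf_subsystem_add_listener":
                p = call["params"]
                la = p["listen_address"]
                yield p["nqn"], la["traddr"], la["trsvcid"], p.get("sock_impl")

print(list(listeners(dump)))
```

The same walk generalizes to any method in the dump (e.g. `keyring_file_add_key` or `nvmf_create_transport`), since every subsystem's config is just a list of recorded RPC calls.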
00:20:13.711 [2024-12-05 21:13:21.760959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.970 [2024-12-05 21:13:21.974562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.970 [2024-12-05 21:13:22.006596] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.970 [2024-12-05 21:13:22.006809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1334028 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1334028 /var/tmp/bdevperf.sock 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1334028 ']' 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.539 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:14.539 "subsystems": [ 00:20:14.539 { 00:20:14.539 "subsystem": "keyring", 00:20:14.539 "config": [ 00:20:14.539 { 00:20:14.539 "method": "keyring_file_add_key", 00:20:14.539 "params": { 00:20:14.539 "name": "key0", 00:20:14.539 "path": "/tmp/tmp.aY9AcHXBES" 00:20:14.539 } 00:20:14.539 } 00:20:14.539 ] 00:20:14.539 }, 00:20:14.539 { 00:20:14.539 "subsystem": "iobuf", 00:20:14.539 "config": [ 00:20:14.539 { 00:20:14.539 "method": "iobuf_set_options", 00:20:14.539 "params": { 00:20:14.539 "small_pool_count": 8192, 00:20:14.539 "large_pool_count": 1024, 00:20:14.539 "small_bufsize": 8192, 00:20:14.539 "large_bufsize": 135168, 00:20:14.539 "enable_numa": false 00:20:14.539 } 00:20:14.539 } 00:20:14.539 ] 00:20:14.539 }, 00:20:14.539 { 00:20:14.539 "subsystem": "sock", 00:20:14.539 "config": [ 00:20:14.539 { 00:20:14.539 "method": "sock_set_default_impl", 00:20:14.539 "params": { 00:20:14.539 "impl_name": "posix" 00:20:14.539 } 00:20:14.539 }, 00:20:14.539 { 00:20:14.539 "method": "sock_impl_set_options", 00:20:14.539 "params": { 00:20:14.539 "impl_name": "ssl", 00:20:14.539 "recv_buf_size": 4096, 00:20:14.539 "send_buf_size": 4096, 00:20:14.539 "enable_recv_pipe": true, 00:20:14.539 "enable_quickack": false, 00:20:14.539 "enable_placement_id": 0, 00:20:14.539 "enable_zerocopy_send_server": true, 00:20:14.539 "enable_zerocopy_send_client": false, 00:20:14.539 "zerocopy_threshold": 0, 00:20:14.539 "tls_version": 0, 00:20:14.539 "enable_ktls": false 00:20:14.539 } 00:20:14.539 }, 00:20:14.539 { 00:20:14.539 "method": "sock_impl_set_options", 00:20:14.539 "params": { 
00:20:14.540 "impl_name": "posix", 00:20:14.540 "recv_buf_size": 2097152, 00:20:14.540 "send_buf_size": 2097152, 00:20:14.540 "enable_recv_pipe": true, 00:20:14.540 "enable_quickack": false, 00:20:14.540 "enable_placement_id": 0, 00:20:14.540 "enable_zerocopy_send_server": true, 00:20:14.540 "enable_zerocopy_send_client": false, 00:20:14.540 "zerocopy_threshold": 0, 00:20:14.540 "tls_version": 0, 00:20:14.540 "enable_ktls": false 00:20:14.540 } 00:20:14.540 } 00:20:14.540 ] 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "subsystem": "vmd", 00:20:14.540 "config": [] 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "subsystem": "accel", 00:20:14.540 "config": [ 00:20:14.540 { 00:20:14.540 "method": "accel_set_options", 00:20:14.540 "params": { 00:20:14.540 "small_cache_size": 128, 00:20:14.540 "large_cache_size": 16, 00:20:14.540 "task_count": 2048, 00:20:14.540 "sequence_count": 2048, 00:20:14.540 "buf_count": 2048 00:20:14.540 } 00:20:14.540 } 00:20:14.540 ] 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "subsystem": "bdev", 00:20:14.540 "config": [ 00:20:14.540 { 00:20:14.540 "method": "bdev_set_options", 00:20:14.540 "params": { 00:20:14.540 "bdev_io_pool_size": 65535, 00:20:14.540 "bdev_io_cache_size": 256, 00:20:14.540 "bdev_auto_examine": true, 00:20:14.540 "iobuf_small_cache_size": 128, 00:20:14.540 "iobuf_large_cache_size": 16 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "method": "bdev_raid_set_options", 00:20:14.540 "params": { 00:20:14.540 "process_window_size_kb": 1024, 00:20:14.540 "process_max_bandwidth_mb_sec": 0 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "method": "bdev_iscsi_set_options", 00:20:14.540 "params": { 00:20:14.540 "timeout_sec": 30 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "method": "bdev_nvme_set_options", 00:20:14.540 "params": { 00:20:14.540 "action_on_timeout": "none", 00:20:14.540 "timeout_us": 0, 00:20:14.540 "timeout_admin_us": 0, 00:20:14.540 "keep_alive_timeout_ms": 10000, 00:20:14.540 
"arbitration_burst": 0, 00:20:14.540 "low_priority_weight": 0, 00:20:14.540 "medium_priority_weight": 0, 00:20:14.540 "high_priority_weight": 0, 00:20:14.540 "nvme_adminq_poll_period_us": 10000, 00:20:14.540 "nvme_ioq_poll_period_us": 0, 00:20:14.540 "io_queue_requests": 512, 00:20:14.540 "delay_cmd_submit": true, 00:20:14.540 "transport_retry_count": 4, 00:20:14.540 "bdev_retry_count": 3, 00:20:14.540 "transport_ack_timeout": 0, 00:20:14.540 "ctrlr_loss_timeout_sec": 0, 00:20:14.540 "reconnect_delay_sec": 0, 00:20:14.540 "fast_io_fail_timeout_sec": 0, 00:20:14.540 "disable_auto_failback": false, 00:20:14.540 "generate_uuids": false, 00:20:14.540 "transport_tos": 0, 00:20:14.540 "nvme_error_stat": false, 00:20:14.540 "rdma_srq_size": 0, 00:20:14.540 "io_path_stat": false, 00:20:14.540 "allow_accel_sequence": false, 00:20:14.540 "rdma_max_cq_size": 0, 00:20:14.540 "rdma_cm_event_timeout_ms": 0, 00:20:14.540 "dhchap_digests": [ 00:20:14.540 "sha256", 00:20:14.540 "sha384", 00:20:14.540 "sha512" 00:20:14.540 ], 00:20:14.540 "dhchap_dhgroups": [ 00:20:14.540 "null", 00:20:14.540 "ffdhe2048", 00:20:14.540 "ffdhe3072", 00:20:14.540 "ffdhe4096", 00:20:14.540 "ffdhe6144", 00:20:14.540 "ffdhe8192" 00:20:14.540 ] 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "method": "bdev_nvme_attach_controller", 00:20:14.540 "params": { 00:20:14.540 "name": "nvme0", 00:20:14.540 "trtype": "TCP", 00:20:14.540 "adrfam": "IPv4", 00:20:14.540 "traddr": "10.0.0.2", 00:20:14.540 "trsvcid": "4420", 00:20:14.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.540 "prchk_reftag": false, 00:20:14.540 "prchk_guard": false, 00:20:14.540 "ctrlr_loss_timeout_sec": 0, 00:20:14.540 "reconnect_delay_sec": 0, 00:20:14.540 "fast_io_fail_timeout_sec": 0, 00:20:14.540 "psk": "key0", 00:20:14.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.540 "hdgst": false, 00:20:14.540 "ddgst": false, 00:20:14.540 "multipath": "multipath" 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 
"method": "bdev_nvme_set_hotplug", 00:20:14.540 "params": { 00:20:14.540 "period_us": 100000, 00:20:14.540 "enable": false 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "method": "bdev_enable_histogram", 00:20:14.540 "params": { 00:20:14.540 "name": "nvme0n1", 00:20:14.540 "enable": true 00:20:14.540 } 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "method": "bdev_wait_for_examine" 00:20:14.540 } 00:20:14.540 ] 00:20:14.540 }, 00:20:14.540 { 00:20:14.540 "subsystem": "nbd", 00:20:14.540 "config": [] 00:20:14.540 } 00:20:14.540 ] 00:20:14.540 }' 00:20:14.540 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.540 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.540 [2024-12-05 21:13:22.555301] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:20:14.540 [2024-12-05 21:13:22.555350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334028 ] 00:20:14.540 [2024-12-05 21:13:22.629149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.799 [2024-12-05 21:13:22.671647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.799 [2024-12-05 21:13:22.825651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.365 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.365 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:15.365 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:15.365 21:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:15.623 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.623 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.623 Running I/O for 1 seconds... 00:20:16.997 5294.00 IOPS, 20.68 MiB/s 00:20:16.997 Latency(us) 00:20:16.997 [2024-12-05T20:13:25.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.998 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:16.998 Verification LBA range: start 0x0 length 0x2000 00:20:16.998 nvme0n1 : 1.01 5355.08 20.92 0.00 0.00 23741.62 5492.54 22344.66 00:20:16.998 [2024-12-05T20:13:25.106Z] =================================================================================================================== 00:20:16.998 [2024-12-05T20:13:25.106Z] Total : 5355.08 20.92 0.00 0.00 23741.62 5492.54 22344.66 00:20:16.998 { 00:20:16.998 "results": [ 00:20:16.998 { 00:20:16.998 "job": "nvme0n1", 00:20:16.998 "core_mask": "0x2", 00:20:16.998 "workload": "verify", 00:20:16.998 "status": "finished", 00:20:16.998 "verify_range": { 00:20:16.998 "start": 0, 00:20:16.998 "length": 8192 00:20:16.998 }, 00:20:16.998 "queue_depth": 128, 00:20:16.998 "io_size": 4096, 00:20:16.998 "runtime": 1.012496, 00:20:16.998 "iops": 5355.082884278061, 00:20:16.998 "mibps": 20.918292516711176, 00:20:16.998 "io_failed": 0, 00:20:16.998 "io_timeout": 0, 00:20:16.998 "avg_latency_us": 23741.624993061778, 00:20:16.998 "min_latency_us": 5492.540952380952, 00:20:16.998 "max_latency_us": 22344.655238095238 00:20:16.998 } 00:20:16.998 ], 00:20:16.998 "core_count": 1 00:20:16.998 } 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:16.998 21:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:16.998 nvmf_trace.0 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1334028 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1334028 ']' 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1334028 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1334028 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1334028' 00:20:16.998 killing process with pid 1334028 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1334028 00:20:16.998 Received shutdown signal, test time was about 1.000000 seconds 00:20:16.998 00:20:16.998 Latency(us) 00:20:16.998 [2024-12-05T20:13:25.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.998 [2024-12-05T20:13:25.106Z] =================================================================================================================== 00:20:16.998 [2024-12-05T20:13:25.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.998 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1334028 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:16.998 rmmod nvme_tcp 00:20:16.998 rmmod nvme_fabrics 00:20:16.998 rmmod nvme_keyring 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1333917 ']' 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1333917 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1333917 ']' 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1333917 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.998 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333917 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333917' 00:20:17.269 killing process with pid 1333917 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1333917 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1333917 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.269 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.799 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yMeXSarFbr /tmp/tmp.tTvXbKesto /tmp/tmp.aY9AcHXBES 00:20:19.800 00:20:19.800 real 1m19.727s 00:20:19.800 user 2m1.670s 00:20:19.800 sys 0m30.540s 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.800 ************************************ 00:20:19.800 END TEST nvmf_tls 00:20:19.800 ************************************ 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:19.800 ************************************ 00:20:19.800 START TEST nvmf_fips 00:20:19.800 ************************************ 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:19.800 * Looking for test storage... 00:20:19.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.800 
21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:19.800 21:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:19.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.800 --rc genhtml_branch_coverage=1 00:20:19.800 --rc genhtml_function_coverage=1 00:20:19.800 --rc genhtml_legend=1 00:20:19.800 --rc geninfo_all_blocks=1 00:20:19.800 --rc geninfo_unexecuted_blocks=1 00:20:19.800 00:20:19.800 ' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:19.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.800 --rc genhtml_branch_coverage=1 00:20:19.800 --rc genhtml_function_coverage=1 00:20:19.800 --rc genhtml_legend=1 00:20:19.800 --rc geninfo_all_blocks=1 00:20:19.800 --rc geninfo_unexecuted_blocks=1 00:20:19.800 00:20:19.800 ' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:19.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.800 --rc genhtml_branch_coverage=1 00:20:19.800 --rc genhtml_function_coverage=1 00:20:19.800 --rc genhtml_legend=1 00:20:19.800 --rc geninfo_all_blocks=1 00:20:19.800 --rc geninfo_unexecuted_blocks=1 00:20:19.800 00:20:19.800 ' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:19.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.800 --rc genhtml_branch_coverage=1 00:20:19.800 --rc genhtml_function_coverage=1 00:20:19.800 --rc genhtml_legend=1 00:20:19.800 --rc geninfo_all_blocks=1 00:20:19.800 --rc geninfo_unexecuted_blocks=1 00:20:19.800 00:20:19.800 ' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.800 21:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.800 21:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:19.800 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:19.801 Error setting digest 00:20:19.801 407257C9EF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:19.801 407257C9EF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.801 21:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:19.801 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:26.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:26.371 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.371 Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.371 21:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.371 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:20:26.372 00:20:26.372 --- 10.0.0.2 ping statistics --- 00:20:26.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.372 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:26.372 00:20:26.372 --- 10.0.0.1 ping statistics --- 00:20:26.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.372 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.372 21:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1337974 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1337974 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1337974 ']' 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.372 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.372 [2024-12-05 21:13:33.842353] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:20:26.372 [2024-12-05 21:13:33.842406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.372 [2024-12-05 21:13:33.923132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.372 [2024-12-05 21:13:33.960993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.372 [2024-12-05 21:13:33.961027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.372 [2024-12-05 21:13:33.961034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.372 [2024-12-05 21:13:33.961039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.372 [2024-12-05 21:13:33.961044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.372 [2024-12-05 21:13:33.961642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.VNC 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.VNC 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.VNC 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.VNC 00:20:26.630 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.888 [2024-12-05 21:13:34.875188] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.888 [2024-12-05 21:13:34.891197] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.888 [2024-12-05 21:13:34.891406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.888 malloc0 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1338224 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1338224 /var/tmp/bdevperf.sock 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1338224 ']' 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.888 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.146 [2024-12-05 21:13:35.021885] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:20:27.146 [2024-12-05 21:13:35.021934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338224 ] 00:20:27.146 [2024-12-05 21:13:35.081637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.146 [2024-12-05 21:13:35.123555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.146 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.146 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:27.146 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.VNC 00:20:27.403 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.661 [2024-12-05 21:13:35.608235] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.661 TLSTESTn1 00:20:27.661 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.918 Running I/O for 10 seconds... 
00:20:29.783 5365.00 IOPS, 20.96 MiB/s [2024-12-05T20:13:38.828Z] 5442.00 IOPS, 21.26 MiB/s [2024-12-05T20:13:40.199Z] 5472.33 IOPS, 21.38 MiB/s [2024-12-05T20:13:41.133Z] 5507.25 IOPS, 21.51 MiB/s [2024-12-05T20:13:42.067Z] 5520.40 IOPS, 21.56 MiB/s [2024-12-05T20:13:42.999Z] 5526.33 IOPS, 21.59 MiB/s [2024-12-05T20:13:43.949Z] 5520.29 IOPS, 21.56 MiB/s [2024-12-05T20:13:44.885Z] 5526.12 IOPS, 21.59 MiB/s [2024-12-05T20:13:46.259Z] 5528.67 IOPS, 21.60 MiB/s [2024-12-05T20:13:46.259Z] 5517.70 IOPS, 21.55 MiB/s 00:20:38.151 Latency(us) 00:20:38.151 [2024-12-05T20:13:46.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.151 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.151 Verification LBA range: start 0x0 length 0x2000 00:20:38.151 TLSTESTn1 : 10.02 5518.49 21.56 0.00 0.00 23155.11 7084.13 23093.64 00:20:38.151 [2024-12-05T20:13:46.259Z] =================================================================================================================== 00:20:38.151 [2024-12-05T20:13:46.259Z] Total : 5518.49 21.56 0.00 0.00 23155.11 7084.13 23093.64 00:20:38.151 { 00:20:38.151 "results": [ 00:20:38.151 { 00:20:38.151 "job": "TLSTESTn1", 00:20:38.151 "core_mask": "0x4", 00:20:38.151 "workload": "verify", 00:20:38.151 "status": "finished", 00:20:38.151 "verify_range": { 00:20:38.151 "start": 0, 00:20:38.151 "length": 8192 00:20:38.151 }, 00:20:38.151 "queue_depth": 128, 00:20:38.151 "io_size": 4096, 00:20:38.151 "runtime": 10.021212, 00:20:38.151 "iops": 5518.494170166244, 00:20:38.151 "mibps": 21.55661785221189, 00:20:38.151 "io_failed": 0, 00:20:38.151 "io_timeout": 0, 00:20:38.151 "avg_latency_us": 23155.114317057338, 00:20:38.151 "min_latency_us": 7084.129523809524, 00:20:38.151 "max_latency_us": 23093.638095238097 00:20:38.151 } 00:20:38.151 ], 00:20:38.151 "core_count": 1 00:20:38.151 } 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:38.151 
21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:38.151 nvmf_trace.0 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1338224 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1338224 ']' 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1338224 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.151 21:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1338224 00:20:38.151 21:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:38.151 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:38.151 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1338224' 00:20:38.151 killing process with pid 1338224 00:20:38.151 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1338224 00:20:38.151 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.151 00:20:38.152 Latency(us) 00:20:38.152 [2024-12-05T20:13:46.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.152 [2024-12-05T20:13:46.260Z] =================================================================================================================== 00:20:38.152 [2024-12-05T20:13:46.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1338224 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.152 rmmod nvme_tcp 00:20:38.152 rmmod nvme_fabrics 00:20:38.152 rmmod nvme_keyring 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1337974 ']' 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1337974 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1337974 ']' 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1337974 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.152 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1337974 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1337974' 00:20:38.410 killing process with pid 1337974 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1337974 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1337974 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.410 21:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.VNC 00:20:40.941 00:20:40.941 real 0m21.087s 00:20:40.941 user 0m22.125s 00:20:40.941 sys 0m9.621s 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.941 ************************************ 00:20:40.941 END TEST nvmf_fips 00:20:40.941 ************************************ 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.941 ************************************ 00:20:40.941 START TEST nvmf_control_msg_list 00:20:40.941 ************************************ 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:40.941 * Looking for test storage... 00:20:40.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.941 21:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:40.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.941 --rc genhtml_branch_coverage=1 00:20:40.941 --rc genhtml_function_coverage=1 00:20:40.941 --rc genhtml_legend=1 00:20:40.941 --rc geninfo_all_blocks=1 00:20:40.941 --rc geninfo_unexecuted_blocks=1 00:20:40.941 00:20:40.941 ' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:40.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.941 --rc genhtml_branch_coverage=1 00:20:40.941 --rc genhtml_function_coverage=1 00:20:40.941 --rc genhtml_legend=1 00:20:40.941 --rc geninfo_all_blocks=1 00:20:40.941 --rc geninfo_unexecuted_blocks=1 00:20:40.941 00:20:40.941 ' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:40.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.941 --rc genhtml_branch_coverage=1 00:20:40.941 --rc genhtml_function_coverage=1 00:20:40.941 --rc genhtml_legend=1 00:20:40.941 --rc geninfo_all_blocks=1 00:20:40.941 --rc geninfo_unexecuted_blocks=1 00:20:40.941 00:20:40.941 ' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:20:40.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.941 --rc genhtml_branch_coverage=1 00:20:40.941 --rc genhtml_function_coverage=1 00:20:40.941 --rc genhtml_legend=1 00:20:40.941 --rc geninfo_all_blocks=1 00:20:40.941 --rc geninfo_unexecuted_blocks=1 00:20:40.941 00:20:40.941 ' 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.941 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.942 21:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.942 21:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.942 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.507 21:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.507 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.508 21:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.508 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.508 21:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.508 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.508 21:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:20:47.508 00:20:47.508 --- 10.0.0.2 ping statistics --- 00:20:47.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.508 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:20:47.508 00:20:47.508 --- 10.0.0.1 ping statistics --- 00:20:47.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.508 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1343580 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1343580 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1343580 ']' 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.508 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.508 [2024-12-05 21:13:54.790712] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:20:47.508 [2024-12-05 21:13:54.790762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.508 [2024-12-05 21:13:54.871546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.508 [2024-12-05 21:13:54.912429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.508 [2024-12-05 21:13:54.912463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.508 [2024-12-05 21:13:54.912473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.508 [2024-12-05 21:13:54.912479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.508 [2024-12-05 21:13:54.912484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.508 [2024-12-05 21:13:54.913050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.508 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.509 [2024-12-05 21:13:55.049651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.509 Malloc0 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.509 [2024-12-05 21:13:55.089948] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1343600 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1343601 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1343602 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.509 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1343600 00:20:47.509 [2024-12-05 21:13:55.168432] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:47.509 [2024-12-05 21:13:55.188521] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:47.509 [2024-12-05 21:13:55.188691] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:48.443 Initializing NVMe Controllers 00:20:48.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:48.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:48.443 Initialization complete. Launching workers. 00:20:48.443 ======================================================== 00:20:48.443 Latency(us) 00:20:48.443 Device Information : IOPS MiB/s Average min max 00:20:48.443 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6509.00 25.43 153.29 132.76 364.80 00:20:48.443 ======================================================== 00:20:48.443 Total : 6509.00 25.43 153.29 132.76 364.80 00:20:48.443 00:20:48.443 Initializing NVMe Controllers 00:20:48.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:48.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:48.443 Initialization complete. Launching workers. 
00:20:48.443 ======================================================== 00:20:48.443 Latency(us) 00:20:48.443 Device Information : IOPS MiB/s Average min max 00:20:48.443 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6540.00 25.55 152.56 130.20 364.08 00:20:48.443 ======================================================== 00:20:48.443 Total : 6540.00 25.55 152.56 130.20 364.08 00:20:48.443 00:20:48.443 Initializing NVMe Controllers 00:20:48.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:48.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:48.443 Initialization complete. Launching workers. 00:20:48.443 ======================================================== 00:20:48.443 Latency(us) 00:20:48.443 Device Information : IOPS MiB/s Average min max 00:20:48.443 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40938.88 40780.36 41878.70 00:20:48.443 ======================================================== 00:20:48.443 Total : 25.00 0.10 40938.88 40780.36 41878.70 00:20:48.443 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1343601 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1343602 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.443 21:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.443 rmmod nvme_tcp 00:20:48.443 rmmod nvme_fabrics 00:20:48.443 rmmod nvme_keyring 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1343580 ']' 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1343580 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1343580 ']' 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1343580 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1343580 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1343580' 00:20:48.443 killing process with pid 1343580 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1343580 00:20:48.443 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1343580 00:20:48.702 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.703 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.607 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.607 00:20:50.607 real 0m10.116s 00:20:50.607 user 0m6.490s 
00:20:50.607 sys 0m5.631s 00:20:50.607 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.607 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.607 ************************************ 00:20:50.607 END TEST nvmf_control_msg_list 00:20:50.607 ************************************ 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.866 ************************************ 00:20:50.866 START TEST nvmf_wait_for_buf 00:20:50.866 ************************************ 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:50.866 * Looking for test storage... 
00:20:50.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.866 --rc genhtml_branch_coverage=1 00:20:50.866 --rc genhtml_function_coverage=1 00:20:50.866 --rc genhtml_legend=1 00:20:50.866 --rc geninfo_all_blocks=1 00:20:50.866 --rc geninfo_unexecuted_blocks=1 00:20:50.866 00:20:50.866 ' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.866 --rc genhtml_branch_coverage=1 00:20:50.866 --rc genhtml_function_coverage=1 00:20:50.866 --rc genhtml_legend=1 00:20:50.866 --rc geninfo_all_blocks=1 00:20:50.866 --rc geninfo_unexecuted_blocks=1 00:20:50.866 00:20:50.866 ' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.866 --rc genhtml_branch_coverage=1 00:20:50.866 --rc genhtml_function_coverage=1 00:20:50.866 --rc genhtml_legend=1 00:20:50.866 --rc geninfo_all_blocks=1 00:20:50.866 --rc geninfo_unexecuted_blocks=1 00:20:50.866 00:20:50.866 ' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.866 --rc genhtml_branch_coverage=1 00:20:50.866 --rc genhtml_function_coverage=1 00:20:50.866 --rc genhtml_legend=1 00:20:50.866 --rc geninfo_all_blocks=1 00:20:50.866 --rc geninfo_unexecuted_blocks=1 00:20:50.866 00:20:50.866 ' 00:20:50.866 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.867 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.126 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:51.126 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:51.126 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.126 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.126 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.127 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.127 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.127 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:51.127 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.127 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.699 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.699 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:57.699 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.699 21:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.699 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.699 21:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.699 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.700 21:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:20:57.700 00:20:57.700 --- 10.0.0.2 ping statistics --- 00:20:57.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.700 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:20:57.700 00:20:57.700 --- 10.0.0.1 ping statistics --- 00:20:57.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.700 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1347360 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1347360 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1347360 ']' 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.700 21:14:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 [2024-12-05 21:14:05.035320] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:20:57.700 [2024-12-05 21:14:05.035365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.700 [2024-12-05 21:14:05.115659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.700 [2024-12-05 21:14:05.157151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.700 [2024-12-05 21:14:05.157182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:57.700 [2024-12-05 21:14:05.157189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.700 [2024-12-05 21:14:05.157195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.700 [2024-12-05 21:14:05.157200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.700 [2024-12-05 21:14:05.157751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 
21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 Malloc0 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.700 [2024-12-05 21:14:05.318896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:57.700 [2024-12-05 21:14:05.347086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:57.700 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.700 [2024-12-05 21:14:05.426802] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:59.074 Initializing NVMe Controllers 00:20:59.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:59.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:59.074 Initialization complete. Launching workers. 00:20:59.074 ======================================================== 00:20:59.074 Latency(us) 00:20:59.074 Device Information : IOPS MiB/s Average min max 00:20:59.074 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 27.00 3.38 155798.37 7277.55 191536.91 00:20:59.074 ======================================================== 00:20:59.074 Total : 27.00 3.38 155798.37 7277.55 191536.91 00:20:59.074 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.074 21:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=406 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 406 -eq 0 ]] 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:59.074 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.075 rmmod nvme_tcp 00:20:59.075 rmmod nvme_fabrics 00:20:59.075 rmmod nvme_keyring 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1347360 ']' 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1347360 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1347360 ']' 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1347360 
00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.075 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1347360 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1347360' 00:20:59.075 killing process with pid 1347360 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1347360 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1347360 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.075 21:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.075 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:01.669 00:21:01.669 real 0m10.456s 00:21:01.669 user 0m3.959s 00:21:01.669 sys 0m4.919s 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.669 ************************************ 00:21:01.669 END TEST nvmf_wait_for_buf 00:21:01.669 ************************************ 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.669 21:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.034 
21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.034 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.035 21:14:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:07.035 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.035 Found net devices under 0000:86:00.1: cvl_0_1 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.035 ************************************ 00:21:07.035 START TEST nvmf_perf_adq 00:21:07.035 ************************************ 00:21:07.035 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:07.035 * Looking for test storage... 00:21:07.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:07.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.035 --rc genhtml_branch_coverage=1 00:21:07.035 --rc genhtml_function_coverage=1 00:21:07.035 --rc genhtml_legend=1 00:21:07.035 --rc geninfo_all_blocks=1 00:21:07.035 --rc geninfo_unexecuted_blocks=1 00:21:07.035 00:21:07.035 ' 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:07.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.035 --rc genhtml_branch_coverage=1 00:21:07.035 --rc genhtml_function_coverage=1 00:21:07.035 --rc genhtml_legend=1 00:21:07.035 --rc geninfo_all_blocks=1 00:21:07.035 --rc geninfo_unexecuted_blocks=1 00:21:07.035 00:21:07.035 ' 00:21:07.035 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:07.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.036 --rc genhtml_branch_coverage=1 00:21:07.036 --rc genhtml_function_coverage=1 00:21:07.036 --rc genhtml_legend=1 00:21:07.036 --rc geninfo_all_blocks=1 00:21:07.036 --rc geninfo_unexecuted_blocks=1 00:21:07.036 00:21:07.036 ' 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:07.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.036 --rc genhtml_branch_coverage=1 00:21:07.036 --rc genhtml_function_coverage=1 00:21:07.036 --rc genhtml_legend=1 00:21:07.036 --rc geninfo_all_blocks=1 00:21:07.036 --rc geninfo_unexecuted_blocks=1 00:21:07.036 00:21:07.036 ' 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.036 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.294 21:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.294 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.862 21:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:13.862 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:13.862 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:13.862 Found net devices under 0000:86:00.0: cvl_0_0 00:21:13.862 21:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.862 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:13.863 Found net devices under 0000:86:00.1: cvl_0_1 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:13.863 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:13.863 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:16.398 21:14:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.672 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.673 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.673 21:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:21.673 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.673 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:21.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:21:21.673 00:21:21.673 --- 10.0.0.2 ping statistics --- 00:21:21.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.673 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:21:21.673 00:21:21.673 --- 10.0.0.1 ping statistics --- 00:21:21.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.673 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1355729 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1355729 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1355729 ']' 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.673 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.673 [2024-12-05 21:14:29.378136] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:21:21.674 [2024-12-05 21:14:29.378183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.674 [2024-12-05 21:14:29.455502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.674 [2024-12-05 21:14:29.499050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.674 [2024-12-05 21:14:29.499086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.674 [2024-12-05 21:14:29.499095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.674 [2024-12-05 21:14:29.499102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.674 [2024-12-05 21:14:29.499109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.674 [2024-12-05 21:14:29.500601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.674 [2024-12-05 21:14:29.500710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.674 [2024-12-05 21:14:29.500825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.674 [2024-12-05 21:14:29.500835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.240 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:22.241 21:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.241 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 [2024-12-05 21:14:30.393632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 Malloc1 00:21:22.500 21:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 [2024-12-05 21:14:30.454115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1355977 00:21:22.500 21:14:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:22.500 21:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:24.402 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:24.402 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.403 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.403 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.403 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:24.403 "tick_rate": 2100000000, 00:21:24.403 "poll_groups": [ 00:21:24.403 { 00:21:24.403 "name": "nvmf_tgt_poll_group_000", 00:21:24.403 "admin_qpairs": 1, 00:21:24.403 "io_qpairs": 1, 00:21:24.403 "current_admin_qpairs": 1, 00:21:24.403 "current_io_qpairs": 1, 00:21:24.403 "pending_bdev_io": 0, 00:21:24.403 "completed_nvme_io": 19359, 00:21:24.403 "transports": [ 00:21:24.403 { 00:21:24.403 "trtype": "TCP" 00:21:24.403 } 00:21:24.403 ] 00:21:24.403 }, 00:21:24.403 { 00:21:24.403 "name": "nvmf_tgt_poll_group_001", 00:21:24.403 "admin_qpairs": 0, 00:21:24.403 "io_qpairs": 1, 00:21:24.403 "current_admin_qpairs": 0, 00:21:24.403 "current_io_qpairs": 1, 00:21:24.403 "pending_bdev_io": 0, 00:21:24.403 "completed_nvme_io": 19571, 00:21:24.403 "transports": [ 00:21:24.403 { 00:21:24.403 "trtype": "TCP" 00:21:24.403 } 00:21:24.403 ] 00:21:24.403 }, 00:21:24.403 { 00:21:24.403 "name": "nvmf_tgt_poll_group_002", 00:21:24.403 "admin_qpairs": 0, 00:21:24.403 "io_qpairs": 1, 00:21:24.403 "current_admin_qpairs": 0, 00:21:24.403 "current_io_qpairs": 1, 00:21:24.403 "pending_bdev_io": 0, 00:21:24.403 "completed_nvme_io": 19802, 00:21:24.403 
"transports": [ 00:21:24.403 { 00:21:24.403 "trtype": "TCP" 00:21:24.403 } 00:21:24.403 ] 00:21:24.403 }, 00:21:24.403 { 00:21:24.403 "name": "nvmf_tgt_poll_group_003", 00:21:24.403 "admin_qpairs": 0, 00:21:24.403 "io_qpairs": 1, 00:21:24.403 "current_admin_qpairs": 0, 00:21:24.403 "current_io_qpairs": 1, 00:21:24.403 "pending_bdev_io": 0, 00:21:24.403 "completed_nvme_io": 19287, 00:21:24.403 "transports": [ 00:21:24.403 { 00:21:24.403 "trtype": "TCP" 00:21:24.403 } 00:21:24.403 ] 00:21:24.403 } 00:21:24.403 ] 00:21:24.403 }' 00:21:24.403 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:24.403 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:24.661 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:24.661 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:24.661 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1355977 00:21:32.766 Initializing NVMe Controllers 00:21:32.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:32.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:32.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:32.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:32.766 Initialization complete. Launching workers. 
00:21:32.766 ======================================================== 00:21:32.766 Latency(us) 00:21:32.766 Device Information : IOPS MiB/s Average min max 00:21:32.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10441.90 40.79 6128.77 2445.03 10222.66 00:21:32.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10512.70 41.07 6087.73 2114.53 10430.17 00:21:32.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10446.20 40.81 6126.90 2374.39 10595.79 00:21:32.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10341.70 40.40 6189.63 2350.06 10604.23 00:21:32.766 ======================================================== 00:21:32.766 Total : 41742.50 163.06 6133.04 2114.53 10604.23 00:21:32.766 00:21:32.766 [2024-12-05 21:14:40.621177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d29dc0 is same with the state(6) to be set 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.766 rmmod nvme_tcp 00:21:32.766 rmmod nvme_fabrics 00:21:32.766 rmmod nvme_keyring 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 
-- # set -e 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1355729 ']' 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1355729 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1355729 ']' 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1355729 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1355729 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1355729' 00:21:32.766 killing process with pid 1355729 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1355729 00:21:32.766 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1355729 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@297 -- # iptr 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.025 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.930 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.930 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:34.930 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:34.930 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:36.310 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:38.214 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.492 21:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:43.492 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.492 
21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:43.492 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.492 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:21:43.493 Found net devices under 0000:86:00.0: cvl_0_0 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:43.493 Found net devices under 0000:86:00.1: cvl_0_1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:21:43.493 00:21:43.493 --- 10.0.0.2 ping statistics --- 00:21:43.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.493 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:21:43.493 00:21:43.493 --- 10.0.0.1 ping statistics --- 00:21:43.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.493 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:43.493 net.core.busy_poll = 1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:43.493 net.core.busy_read = 1 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:43.493 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1359754 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1359754 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1359754 ']' 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.753 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.753 [2024-12-05 21:14:51.724007] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:21:43.753 [2024-12-05 21:14:51.724056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.753 [2024-12-05 21:14:51.786092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.753 [2024-12-05 21:14:51.830076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.753 [2024-12-05 21:14:51.830111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.753 [2024-12-05 21:14:51.830118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.753 [2024-12-05 21:14:51.830124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:43.753 [2024-12-05 21:14:51.830129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.753 [2024-12-05 21:14:51.834386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.753 [2024-12-05 21:14:51.834432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.753 [2024-12-05 21:14:51.834538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.753 [2024-12-05 21:14:51.834539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.011 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.011 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:44.011 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.011 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.011 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.012 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 [2024-12-05 21:14:52.073161] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.012 21:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 Malloc1 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.012 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.270 [2024-12-05 21:14:52.135231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1359785 
00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:44.270 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:46.174 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:46.174 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.174 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.174 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.174 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:46.174 "tick_rate": 2100000000, 00:21:46.174 "poll_groups": [ 00:21:46.174 { 00:21:46.175 "name": "nvmf_tgt_poll_group_000", 00:21:46.175 "admin_qpairs": 1, 00:21:46.175 "io_qpairs": 1, 00:21:46.175 "current_admin_qpairs": 1, 00:21:46.175 "current_io_qpairs": 1, 00:21:46.175 "pending_bdev_io": 0, 00:21:46.175 "completed_nvme_io": 24616, 00:21:46.175 "transports": [ 00:21:46.175 { 00:21:46.175 "trtype": "TCP" 00:21:46.175 } 00:21:46.175 ] 00:21:46.175 }, 00:21:46.175 { 00:21:46.175 "name": "nvmf_tgt_poll_group_001", 00:21:46.175 "admin_qpairs": 0, 00:21:46.175 "io_qpairs": 3, 00:21:46.175 "current_admin_qpairs": 0, 00:21:46.175 "current_io_qpairs": 3, 00:21:46.175 "pending_bdev_io": 0, 00:21:46.175 "completed_nvme_io": 29717, 00:21:46.175 "transports": [ 00:21:46.175 { 00:21:46.175 "trtype": "TCP" 00:21:46.175 } 00:21:46.175 ] 00:21:46.175 }, 00:21:46.175 { 00:21:46.175 "name": "nvmf_tgt_poll_group_002", 00:21:46.175 "admin_qpairs": 0, 00:21:46.175 "io_qpairs": 0, 00:21:46.175 "current_admin_qpairs": 0, 
00:21:46.175 "current_io_qpairs": 0, 00:21:46.175 "pending_bdev_io": 0, 00:21:46.175 "completed_nvme_io": 0, 00:21:46.175 "transports": [ 00:21:46.175 { 00:21:46.175 "trtype": "TCP" 00:21:46.175 } 00:21:46.175 ] 00:21:46.175 }, 00:21:46.175 { 00:21:46.175 "name": "nvmf_tgt_poll_group_003", 00:21:46.175 "admin_qpairs": 0, 00:21:46.175 "io_qpairs": 0, 00:21:46.175 "current_admin_qpairs": 0, 00:21:46.175 "current_io_qpairs": 0, 00:21:46.175 "pending_bdev_io": 0, 00:21:46.175 "completed_nvme_io": 0, 00:21:46.175 "transports": [ 00:21:46.175 { 00:21:46.175 "trtype": "TCP" 00:21:46.175 } 00:21:46.175 ] 00:21:46.175 } 00:21:46.175 ] 00:21:46.175 }' 00:21:46.175 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:46.175 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:46.175 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:46.175 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:46.175 21:14:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1359785 00:21:54.287 Initializing NVMe Controllers 00:21:54.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:54.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:54.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:54.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:54.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:54.287 Initialization complete. Launching workers. 
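The trace above shows perf_adq.sh pulling `nvmf_get_stats`, selecting poll groups whose `current_io_qpairs` is 0, and counting them (`count=2`, so the `[[ 2 -lt 2 ]]` guard does not fire and the test proceeds). A minimal sketch of that idle-group count, using grep on an inlined stats snippet instead of the rpc_cmd/jq plumbing (the JSON here is a trimmed, hypothetical stand-in for the real stats payload):

```shell
# Count poll groups that currently serve no I/O queue pairs, mirroring the
# jq 'select(.current_io_qpairs == 0)' | wc -l step from the log above.
nvmf_stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 3 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }
  ]
}'
# grep -c counts matching lines; each idle group contributes exactly one.
count=$(printf '%s\n' "$nvmf_stats" | grep -c '"current_io_qpairs": 0')
echo "$count"
```

With ADQ steering traffic onto two dedicated queues, two of the four poll groups staying idle is the expected steady state, which is why the script only aborts when fewer than two groups are idle.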
00:21:54.287 ======================================================== 00:21:54.287 Latency(us) 00:21:54.287 Device Information : IOPS MiB/s Average min max 00:21:54.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5150.20 20.12 12430.74 1565.77 59603.53 00:21:54.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15160.79 59.22 4220.84 1710.15 45683.58 00:21:54.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5145.50 20.10 12441.85 1666.39 60457.27 00:21:54.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5018.50 19.60 12760.15 1482.42 59496.67 00:21:54.287 ======================================================== 00:21:54.287 Total : 30474.99 119.04 8402.58 1482.42 60457.27 00:21:54.287 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.287 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.287 rmmod nvme_tcp 00:21:54.287 rmmod nvme_fabrics 00:21:54.287 rmmod nvme_keyring 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:54.545 21:15:02 
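In the spdk_nvme_perf summary above, the Total IOPS row is the sum of the four per-core rows (the Average latency column is correspondingly IOPS-weighted). A quick illustrative check of that aggregation, using the per-core IOPS values copied from the log:

```shell
# Sum the per-core IOPS from the perf summary; should match the Total row.
iops_per_core="5150.20 15160.79 5145.50 5018.50"
# Unquoted expansion word-splits the values onto separate lines for awk.
total=$(printf '%s\n' $iops_per_core | awk '{s += $1} END {printf "%.2f", s}')
echo "$total"   # 30474.99, matching the Total line in the log
```

The lopsided per-core numbers (core 5 sustaining ~3x the IOPS of the others at about a third of the latency) reflect the ADQ flower filter pinning the 4420/tcp flows to the offloaded traffic class rather than spreading them evenly.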
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1359754 ']' 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1359754 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1359754 ']' 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1359754 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1359754 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1359754' 00:21:54.545 killing process with pid 1359754 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1359754 00:21:54.545 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1359754 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:54.805 
21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.805 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:58.091 00:21:58.091 real 0m50.798s 00:21:58.091 user 2m47.043s 00:21:58.091 sys 0m10.430s 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.091 ************************************ 00:21:58.091 END TEST nvmf_perf_adq 00:21:58.091 ************************************ 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.091 ************************************ 00:21:58.091 START TEST nvmf_shutdown 00:21:58.091 ************************************ 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:58.091 * Looking for test storage... 00:21:58.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.091 21:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.091 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:58.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.091 --rc genhtml_branch_coverage=1 00:21:58.091 --rc genhtml_function_coverage=1 00:21:58.091 --rc genhtml_legend=1 00:21:58.091 --rc geninfo_all_blocks=1 00:21:58.091 --rc geninfo_unexecuted_blocks=1 00:21:58.091 00:21:58.091 ' 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:58.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.091 --rc genhtml_branch_coverage=1 00:21:58.091 --rc genhtml_function_coverage=1 00:21:58.091 --rc genhtml_legend=1 00:21:58.091 --rc geninfo_all_blocks=1 00:21:58.091 --rc geninfo_unexecuted_blocks=1 00:21:58.091 00:21:58.091 ' 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:58.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.091 --rc genhtml_branch_coverage=1 00:21:58.091 --rc genhtml_function_coverage=1 00:21:58.091 --rc genhtml_legend=1 00:21:58.091 --rc geninfo_all_blocks=1 00:21:58.091 --rc geninfo_unexecuted_blocks=1 00:21:58.091 00:21:58.091 ' 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:58.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.091 --rc genhtml_branch_coverage=1 00:21:58.091 --rc genhtml_function_coverage=1 00:21:58.091 --rc genhtml_legend=1 00:21:58.091 --rc geninfo_all_blocks=1 00:21:58.091 --rc geninfo_unexecuted_blocks=1 00:21:58.091 00:21:58.091 ' 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
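The block above is scripts/common.sh's `cmp_versions` splitting `1.15` and `2` on `.-:` into arrays and walking them component by component to decide whether the installed lcov is older than 2. The same less-than check can be sketched with `sort -V` (the helper name is mine, not the script's):

```shell
#!/bin/sh
# version_lt A B: succeed (exit 0) when version A sorts strictly before B.
# A stand-in for the component-wise cmp_versions walk traced above.
version_lt() {
    [ "$1" = "$2" ] && return 1   # equal is not less-than
    # sort -V orders dotted version strings component-numerically
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints: lcov 1.15 predates 2
```

The run takes this branch, which is why the extra `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are exported immediately afterwards.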
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.091 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
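The PATH echoed above has accumulated many duplicate `/opt/go`, `/opt/golangci`, and `/opt/protoc` prefixes because paths/export.sh prepends its tool directories each time it is sourced. A small order-preserving de-duplication filter (a hypothetical cleanup, not part of the SPDK scripts) would collapse it:

```shell
#!/bin/sh
# Keep only the first occurrence of each colon-separated PATH entry,
# preserving order. Hypothetical helper; not used by the test scripts.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/sbin"
# prints: /opt/go/1.21.1/bin:/usr/bin:/sbin
```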
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:58.092 ************************************ 00:21:58.092 START TEST nvmf_shutdown_tc1 00:21:58.092 ************************************ 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:58.092 21:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:04.669 21:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.669 21:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:04.669 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.669 21:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:04.669 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:04.669 Found net devices under 0000:86:00.0: cvl_0_0 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:04.669 Found net devices under 0000:86:00.1: cvl_0_1 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.669 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.670 21:15:11 
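The probe loop above matches each discovered PCI device ID against known NIC families; here both ports report `0x8086 - 0x159b` (Intel E810, kernel driver `ice`), so the e810 branch is taken and the two `cvl_0_*` net devices are collected. Illustratively, the ID-to-driver association those checks encode looks like this (IDs copied from the probed lists in the trace; the mapping is an assumption for illustration, not SPDK's table):

```shell
#!/bin/sh
# Map a PCI device ID to the kernel driver the trace above expects.
# Device IDs taken from the e810/x722/mlx lists in the log; illustrative only.
nic_driver_for() {
    case "$1" in
        0x1592|0x159b)                       echo ice ;;        # Intel E810
        0x37d2)                              echo i40e ;;       # Intel X722
        0x101[3579bd]|0x1021|0xa2d6|0xa2dc)  echo mlx5_core ;;  # Mellanox ConnectX
        *)                                   echo unknown ;;
    esac
}

nic_driver_for 0x159b   # prints: ice
```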
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.670 21:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:22:04.670 00:22:04.670 --- 10.0.0.2 ping statistics --- 00:22:04.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.670 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:04.670 00:22:04.670 --- 10.0.0.1 ping statistics --- 00:22:04.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.670 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
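The sequence above isolates one port of the back-to-back cabled NIC pair in its own network namespace, so the SPDK target (`cvl_0_0` at 10.0.0.2 inside `cvl_0_0_ns_spdk`) and the initiator (`cvl_0_1` at 10.0.0.1 in the default namespace) exchange traffic over real hardware on a single host; the two pings confirm reachability in both directions. Condensed, the plumbing is (interface names and addresses from this run; a root-only config fragment, not meant to be re-run as-is):

```shell
# Target side lives in its own namespace; initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk                         # create target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
```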
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1365743 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1365743 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1365743 ']' 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:04.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.670 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.670 [2024-12-05 21:15:12.136743] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:22:04.670 [2024-12-05 21:15:12.136787] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.670 [2024-12-05 21:15:12.216737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.670 [2024-12-05 21:15:12.259438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.670 [2024-12-05 21:15:12.259476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.670 [2024-12-05 21:15:12.259483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.670 [2024-12-05 21:15:12.259489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.670 [2024-12-05 21:15:12.259494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.670 [2024-12-05 21:15:12.260929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.670 [2024-12-05 21:15:12.260962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.670 [2024-12-05 21:15:12.261066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.670 [2024-12-05 21:15:12.261067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:04.929 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.929 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:04.929 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.929 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.929 21:15:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.929 [2024-12-05 21:15:13.008878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.929 21:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.929 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.187 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.187 Malloc1 00:22:05.187 [2024-12-05 21:15:13.118505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.187 Malloc2 00:22:05.187 Malloc3 00:22:05.187 Malloc4 00:22:05.187 Malloc5 00:22:05.445 Malloc6 00:22:05.445 Malloc7 00:22:05.445 Malloc8 00:22:05.445 Malloc9 
00:22:05.445 Malloc10 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1366024 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1366024 /var/tmp/bdevperf.sock 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1366024 ']' 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.445 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.445 { 00:22:05.445 "params": { 00:22:05.445 "name": "Nvme$subsystem", 00:22:05.445 "trtype": "$TEST_TRANSPORT", 00:22:05.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.445 "adrfam": "ipv4", 00:22:05.445 "trsvcid": "$NVMF_PORT", 00:22:05.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.445 "hdgst": ${hdgst:-false}, 00:22:05.445 "ddgst": ${ddgst:-false} 00:22:05.445 }, 00:22:05.445 "method": "bdev_nvme_attach_controller" 00:22:05.445 } 00:22:05.445 EOF 00:22:05.445 )") 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.704 { 00:22:05.704 "params": { 00:22:05.704 "name": "Nvme$subsystem", 00:22:05.704 "trtype": "$TEST_TRANSPORT", 00:22:05.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.704 "adrfam": "ipv4", 00:22:05.704 "trsvcid": "$NVMF_PORT", 00:22:05.704 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.704 "hdgst": ${hdgst:-false}, 00:22:05.704 "ddgst": ${ddgst:-false} 00:22:05.704 }, 00:22:05.704 "method": "bdev_nvme_attach_controller" 00:22:05.704 } 00:22:05.704 EOF 00:22:05.704 )") 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.704 { 00:22:05.704 "params": { 00:22:05.704 "name": "Nvme$subsystem", 00:22:05.704 "trtype": "$TEST_TRANSPORT", 00:22:05.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.704 "adrfam": "ipv4", 00:22:05.704 "trsvcid": "$NVMF_PORT", 00:22:05.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.704 "hdgst": ${hdgst:-false}, 00:22:05.704 "ddgst": ${ddgst:-false} 00:22:05.704 }, 00:22:05.704 "method": "bdev_nvme_attach_controller" 00:22:05.704 } 00:22:05.704 EOF 00:22:05.704 )") 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.704 { 00:22:05.704 "params": { 00:22:05.704 "name": "Nvme$subsystem", 00:22:05.704 "trtype": "$TEST_TRANSPORT", 00:22:05.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.704 "adrfam": "ipv4", 00:22:05.704 "trsvcid": "$NVMF_PORT", 00:22:05.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.704 "hdgst": 
${hdgst:-false}, 00:22:05.704 "ddgst": ${ddgst:-false} 00:22:05.704 }, 00:22:05.704 "method": "bdev_nvme_attach_controller" 00:22:05.704 } 00:22:05.704 EOF 00:22:05.704 )") 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.704 { 00:22:05.704 "params": { 00:22:05.704 "name": "Nvme$subsystem", 00:22:05.704 "trtype": "$TEST_TRANSPORT", 00:22:05.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.704 "adrfam": "ipv4", 00:22:05.704 "trsvcid": "$NVMF_PORT", 00:22:05.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.704 "hdgst": ${hdgst:-false}, 00:22:05.704 "ddgst": ${ddgst:-false} 00:22:05.704 }, 00:22:05.704 "method": "bdev_nvme_attach_controller" 00:22:05.704 } 00:22:05.704 EOF 00:22:05.704 )") 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.704 { 00:22:05.704 "params": { 00:22:05.704 "name": "Nvme$subsystem", 00:22:05.704 "trtype": "$TEST_TRANSPORT", 00:22:05.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.704 "adrfam": "ipv4", 00:22:05.704 "trsvcid": "$NVMF_PORT", 00:22:05.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.704 "hdgst": ${hdgst:-false}, 00:22:05.704 "ddgst": ${ddgst:-false} 00:22:05.704 }, 00:22:05.704 "method": "bdev_nvme_attach_controller" 
00:22:05.704 } 00:22:05.704 EOF 00:22:05.704 )") 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.704 { 00:22:05.704 "params": { 00:22:05.704 "name": "Nvme$subsystem", 00:22:05.704 "trtype": "$TEST_TRANSPORT", 00:22:05.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.704 "adrfam": "ipv4", 00:22:05.704 "trsvcid": "$NVMF_PORT", 00:22:05.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.704 "hdgst": ${hdgst:-false}, 00:22:05.704 "ddgst": ${ddgst:-false} 00:22:05.704 }, 00:22:05.704 "method": "bdev_nvme_attach_controller" 00:22:05.704 } 00:22:05.704 EOF 00:22:05.704 )") 00:22:05.704 [2024-12-05 21:15:13.591081] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:05.704 [2024-12-05 21:15:13.591131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:05.704 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.705 { 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme$subsystem", 00:22:05.705 "trtype": "$TEST_TRANSPORT", 00:22:05.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "$NVMF_PORT", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.705 "hdgst": ${hdgst:-false}, 00:22:05.705 "ddgst": ${ddgst:-false} 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 } 00:22:05.705 EOF 00:22:05.705 )") 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.705 { 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme$subsystem", 00:22:05.705 "trtype": "$TEST_TRANSPORT", 00:22:05.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "$NVMF_PORT", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.705 "hdgst": ${hdgst:-false}, 
00:22:05.705 "ddgst": ${ddgst:-false} 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 } 00:22:05.705 EOF 00:22:05.705 )") 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.705 { 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme$subsystem", 00:22:05.705 "trtype": "$TEST_TRANSPORT", 00:22:05.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "$NVMF_PORT", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.705 "hdgst": ${hdgst:-false}, 00:22:05.705 "ddgst": ${ddgst:-false} 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 } 00:22:05.705 EOF 00:22:05.705 )") 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:05.705 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme1", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme2", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme3", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme4", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 
00:22:05.705 "name": "Nvme5", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme6", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme7", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme8", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme9", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 },{ 00:22:05.705 "params": { 00:22:05.705 "name": "Nvme10", 00:22:05.705 "trtype": "tcp", 00:22:05.705 "traddr": "10.0.0.2", 00:22:05.705 "adrfam": "ipv4", 00:22:05.705 "trsvcid": "4420", 00:22:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:05.705 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:05.705 "hdgst": false, 00:22:05.705 "ddgst": false 00:22:05.705 }, 00:22:05.705 "method": "bdev_nvme_attach_controller" 00:22:05.705 }' 00:22:05.705 [2024-12-05 21:15:13.667599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.705 [2024-12-05 21:15:13.708420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1366024 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:07.607 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:08.542 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1366024 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1365743 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.542 "method": "bdev_nvme_attach_controller" 00:22:08.542 } 00:22:08.542 EOF 00:22:08.542 )") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.542 21:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.542 "method": "bdev_nvme_attach_controller" 00:22:08.542 } 00:22:08.542 EOF 00:22:08.542 )") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.542 "method": "bdev_nvme_attach_controller" 00:22:08.542 } 00:22:08.542 EOF 00:22:08.542 )") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 
21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.542 "method": "bdev_nvme_attach_controller" 00:22:08.542 } 00:22:08.542 EOF 00:22:08.542 )") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.542 "method": "bdev_nvme_attach_controller" 00:22:08.542 } 00:22:08.542 EOF 00:22:08.542 )") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.542 "method": "bdev_nvme_attach_controller" 00:22:08.542 } 00:22:08.542 EOF 00:22:08.542 )") 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.542 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.542 { 00:22:08.542 "params": { 00:22:08.542 "name": "Nvme$subsystem", 00:22:08.542 "trtype": "$TEST_TRANSPORT", 00:22:08.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.542 "adrfam": "ipv4", 00:22:08.542 "trsvcid": "$NVMF_PORT", 00:22:08.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.542 "hdgst": ${hdgst:-false}, 00:22:08.542 "ddgst": ${ddgst:-false} 00:22:08.542 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 } 00:22:08.543 EOF 00:22:08.543 )") 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.543 [2024-12-05 21:15:16.519511] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:08.543 [2024-12-05 21:15:16.519562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366514 ] 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.543 { 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme$subsystem", 00:22:08.543 "trtype": "$TEST_TRANSPORT", 00:22:08.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "$NVMF_PORT", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.543 "hdgst": ${hdgst:-false}, 00:22:08.543 "ddgst": ${ddgst:-false} 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 } 00:22:08.543 EOF 00:22:08.543 )") 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.543 { 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme$subsystem", 00:22:08.543 "trtype": "$TEST_TRANSPORT", 00:22:08.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "$NVMF_PORT", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.543 "hdgst": ${hdgst:-false}, 00:22:08.543 "ddgst": ${ddgst:-false} 00:22:08.543 }, 00:22:08.543 "method": 
"bdev_nvme_attach_controller" 00:22:08.543 } 00:22:08.543 EOF 00:22:08.543 )") 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.543 { 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme$subsystem", 00:22:08.543 "trtype": "$TEST_TRANSPORT", 00:22:08.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "$NVMF_PORT", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.543 "hdgst": ${hdgst:-false}, 00:22:08.543 "ddgst": ${ddgst:-false} 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 } 00:22:08.543 EOF 00:22:08.543 )") 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:08.543 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme1", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme2", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme3", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme4", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 
00:22:08.543 "name": "Nvme5", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme6", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme7", 00:22:08.543 "trtype": "tcp", 00:22:08.543 "traddr": "10.0.0.2", 00:22:08.543 "adrfam": "ipv4", 00:22:08.543 "trsvcid": "4420", 00:22:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:08.543 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:08.543 "hdgst": false, 00:22:08.543 "ddgst": false 00:22:08.543 }, 00:22:08.543 "method": "bdev_nvme_attach_controller" 00:22:08.543 },{ 00:22:08.543 "params": { 00:22:08.543 "name": "Nvme8", 00:22:08.543 "trtype": "tcp", 00:22:08.544 "traddr": "10.0.0.2", 00:22:08.544 "adrfam": "ipv4", 00:22:08.544 "trsvcid": "4420", 00:22:08.544 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:08.544 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:08.544 "hdgst": false, 00:22:08.544 "ddgst": false 00:22:08.544 }, 00:22:08.544 "method": "bdev_nvme_attach_controller" 00:22:08.544 },{ 00:22:08.544 "params": { 00:22:08.544 "name": "Nvme9", 00:22:08.544 "trtype": "tcp", 00:22:08.544 "traddr": "10.0.0.2", 00:22:08.544 "adrfam": "ipv4", 00:22:08.544 "trsvcid": "4420", 00:22:08.544 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:08.544 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:08.544 "hdgst": false, 00:22:08.544 "ddgst": false 00:22:08.544 }, 00:22:08.544 "method": "bdev_nvme_attach_controller" 00:22:08.544 },{ 00:22:08.544 "params": { 00:22:08.544 "name": "Nvme10", 00:22:08.544 "trtype": "tcp", 00:22:08.544 "traddr": "10.0.0.2", 00:22:08.544 "adrfam": "ipv4", 00:22:08.544 "trsvcid": "4420", 00:22:08.544 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:08.544 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:08.544 "hdgst": false, 00:22:08.544 "ddgst": false 00:22:08.544 }, 00:22:08.544 "method": "bdev_nvme_attach_controller" 00:22:08.544 }' 00:22:08.544 [2024-12-05 21:15:16.595662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.544 [2024-12-05 21:15:16.636537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.443 Running I/O for 1 seconds... 00:22:11.379 2256.00 IOPS, 141.00 MiB/s 00:22:11.379 Latency(us) 00:22:11.379 [2024-12-05T20:15:19.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.379 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme1n1 : 1.13 289.17 18.07 0.00 0.00 218010.62 5867.03 210713.84 00:22:11.379 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme2n1 : 1.14 279.79 17.49 0.00 0.00 223621.71 16352.79 229688.08 00:22:11.379 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme3n1 : 1.14 285.43 17.84 0.00 0.00 214784.90 6491.18 218702.99 00:22:11.379 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme4n1 : 1.13 282.57 17.66 0.00 0.00 215238.36 16103.13 209715.20 00:22:11.379 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme5n1 : 1.08 237.38 14.84 0.00 0.00 251727.73 14667.58 225693.50 00:22:11.379 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme6n1 : 1.15 280.76 17.55 0.00 0.00 210674.06 6803.26 232684.01 00:22:11.379 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme7n1 : 1.15 278.14 17.38 0.00 0.00 209530.68 12170.97 229688.08 00:22:11.379 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme8n1 : 1.15 288.05 18.00 0.00 0.00 198174.24 4868.39 196732.83 00:22:11.379 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme9n1 : 1.16 275.78 17.24 0.00 0.00 205261.00 15728.64 224694.86 00:22:11.379 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.379 Verification LBA range: start 0x0 length 0x400 00:22:11.379 Nvme10n1 : 1.16 276.29 17.27 0.00 0.00 201829.23 16852.11 232684.01 00:22:11.379 [2024-12-05T20:15:19.487Z] =================================================================================================================== 00:22:11.379 [2024-12-05T20:15:19.487Z] Total : 2773.37 173.34 0.00 0.00 214089.15 4868.39 232684.01 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.379 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.639 rmmod nvme_tcp 00:22:11.639 rmmod nvme_fabrics 00:22:11.639 rmmod nvme_keyring 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1365743 ']' 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1365743 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1365743 ']' 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1365743 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1365743 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1365743' 00:22:11.639 killing process with pid 1365743 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1365743 00:22:11.639 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1365743 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.899 21:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.899 21:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.979 00:22:13.979 real 0m15.961s 00:22:13.979 user 0m36.860s 00:22:13.979 sys 0m5.856s 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.979 ************************************ 00:22:13.979 END TEST nvmf_shutdown_tc1 00:22:13.979 ************************************ 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.979 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:14.238 ************************************ 00:22:14.238 
START TEST nvmf_shutdown_tc2 00:22:14.238 ************************************ 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.238 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.238 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.238 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.239 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.239 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.239 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.239 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.239 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:14.239 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.239 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.239 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.239 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:14.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:22:14.499 00:22:14.499 --- 10.0.0.2 ping statistics --- 00:22:14.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.499 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:22:14.499 00:22:14.499 --- 10.0.0.1 ping statistics --- 00:22:14.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.499 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.499 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1367561 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1367561 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1367561 ']' 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.499 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.499 [2024-12-05 21:15:22.472454] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:22:14.499 [2024-12-05 21:15:22.472500] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.499 [2024-12-05 21:15:22.550956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.499 [2024-12-05 21:15:22.592798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.499 [2024-12-05 21:15:22.592835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.499 [2024-12-05 21:15:22.592842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.499 [2024-12-05 21:15:22.592848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.499 [2024-12-05 21:15:22.592854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.499 [2024-12-05 21:15:22.594330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.499 [2024-12-05 21:15:22.594441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.499 [2024-12-05 21:15:22.594547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:14.499 [2024-12-05 21:15:22.594546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.758 [2024-12-05 21:15:22.732654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.758 21:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.758 21:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.758 Malloc1 00:22:14.758 [2024-12-05 21:15:22.854041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.017 Malloc2 00:22:15.017 Malloc3 00:22:15.017 Malloc4 00:22:15.017 Malloc5 00:22:15.017 Malloc6 00:22:15.017 Malloc7 00:22:15.276 Malloc8 00:22:15.276 Malloc9 
00:22:15.276 Malloc10 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1367816 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1367816 /var/tmp/bdevperf.sock 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1367816 ']' 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:15.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 "adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.276 "ddgst": ${ddgst:-false} 00:22:15.276 }, 00:22:15.276 "method": "bdev_nvme_attach_controller" 00:22:15.276 } 00:22:15.276 EOF 00:22:15.276 )") 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 
"adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.276 "ddgst": ${ddgst:-false} 00:22:15.276 }, 00:22:15.276 "method": "bdev_nvme_attach_controller" 00:22:15.276 } 00:22:15.276 EOF 00:22:15.276 )") 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 "adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.276 "ddgst": ${ddgst:-false} 00:22:15.276 }, 00:22:15.276 "method": "bdev_nvme_attach_controller" 00:22:15.276 } 00:22:15.276 EOF 00:22:15.276 )") 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 "adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.276 "ddgst": ${ddgst:-false} 00:22:15.276 }, 00:22:15.276 "method": "bdev_nvme_attach_controller" 00:22:15.276 } 00:22:15.276 EOF 00:22:15.276 )") 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 "adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.276 "ddgst": ${ddgst:-false} 00:22:15.276 }, 00:22:15.276 "method": "bdev_nvme_attach_controller" 00:22:15.276 } 00:22:15.276 EOF 00:22:15.276 )") 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 "adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.276 "ddgst": 
${ddgst:-false} 00:22:15.276 }, 00:22:15.276 "method": "bdev_nvme_attach_controller" 00:22:15.276 } 00:22:15.276 EOF 00:22:15.276 )") 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.276 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.276 { 00:22:15.276 "params": { 00:22:15.276 "name": "Nvme$subsystem", 00:22:15.276 "trtype": "$TEST_TRANSPORT", 00:22:15.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.276 "adrfam": "ipv4", 00:22:15.276 "trsvcid": "$NVMF_PORT", 00:22:15.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.276 "hdgst": ${hdgst:-false}, 00:22:15.277 "ddgst": ${ddgst:-false} 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 } 00:22:15.277 EOF 00:22:15.277 )") 00:22:15.277 [2024-12-05 21:15:23.325010] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:15.277 [2024-12-05 21:15:23.325056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367816 ] 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.277 { 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme$subsystem", 00:22:15.277 "trtype": "$TEST_TRANSPORT", 00:22:15.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "$NVMF_PORT", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.277 "hdgst": ${hdgst:-false}, 00:22:15.277 "ddgst": ${ddgst:-false} 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 } 00:22:15.277 EOF 00:22:15.277 )") 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.277 { 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme$subsystem", 00:22:15.277 "trtype": "$TEST_TRANSPORT", 00:22:15.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "$NVMF_PORT", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.277 "hdgst": 
${hdgst:-false}, 00:22:15.277 "ddgst": ${ddgst:-false} 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 } 00:22:15.277 EOF 00:22:15.277 )") 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.277 { 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme$subsystem", 00:22:15.277 "trtype": "$TEST_TRANSPORT", 00:22:15.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "$NVMF_PORT", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.277 "hdgst": ${hdgst:-false}, 00:22:15.277 "ddgst": ${ddgst:-false} 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 } 00:22:15.277 EOF 00:22:15.277 )") 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:15.277 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme1", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme2", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme3", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme4", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 
00:22:15.277 "name": "Nvme5", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme6", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme7", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme8", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme9", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 },{ 00:22:15.277 "params": { 00:22:15.277 "name": "Nvme10", 00:22:15.277 "trtype": "tcp", 00:22:15.277 "traddr": "10.0.0.2", 00:22:15.277 "adrfam": "ipv4", 00:22:15.277 "trsvcid": "4420", 00:22:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.277 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.277 "hdgst": false, 00:22:15.277 "ddgst": false 00:22:15.277 }, 00:22:15.277 "method": "bdev_nvme_attach_controller" 00:22:15.277 }' 00:22:15.535 [2024-12-05 21:15:23.401432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.535 [2024-12-05 21:15:23.443015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.910 Running I/O for 10 seconds... 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:16.910 21:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:16.910 21:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:17.168 21:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:17.168 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:17.426 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:17.426 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:17.426 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1367816 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1367816 ']' 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1367816 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.427 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367816 00:22:17.685 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.685 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.685 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367816' 00:22:17.685 killing process with pid 1367816 00:22:17.685 21:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1367816 00:22:17.685 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1367816 00:22:17.685 Received shutdown signal, test time was about 0.929186 seconds 00:22:17.685 00:22:17.685 Latency(us) 00:22:17.685 [2024-12-05T20:15:25.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.685 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme1n1 : 0.92 278.61 17.41 0.00 0.00 227338.97 16227.96 223696.21 00:22:17.685 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme2n1 : 0.92 277.46 17.34 0.00 0.00 224444.71 17476.27 221698.93 00:22:17.685 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme3n1 : 0.90 283.00 17.69 0.00 0.00 216108.62 19348.72 227690.79 00:22:17.685 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme4n1 : 0.89 297.54 18.60 0.00 0.00 199604.17 4837.18 215707.06 00:22:17.685 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme5n1 : 0.91 282.28 17.64 0.00 0.00 208490.79 16477.62 218702.99 00:22:17.685 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme6n1 : 0.90 223.44 13.97 0.00 0.00 255125.35 3386.03 226692.14 00:22:17.685 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme7n1 : 
0.90 310.95 19.43 0.00 0.00 180219.10 5430.13 184749.10 00:22:17.685 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme8n1 : 0.91 281.11 17.57 0.00 0.00 198241.28 15666.22 183750.46 00:22:17.685 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.685 Nvme9n1 : 0.93 275.70 17.23 0.00 0.00 198182.28 17476.27 242670.45 00:22:17.685 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.685 Verification LBA range: start 0x0 length 0x400 00:22:17.686 Nvme10n1 : 0.92 279.34 17.46 0.00 0.00 192035.84 18599.74 204721.98 00:22:17.686 [2024-12-05T20:15:25.794Z] =================================================================================================================== 00:22:17.686 [2024-12-05T20:15:25.794Z] Total : 2789.44 174.34 0.00 0.00 208678.98 3386.03 242670.45 00:22:17.944 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1367561 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.880 rmmod nvme_tcp 00:22:18.880 rmmod nvme_fabrics 00:22:18.880 rmmod nvme_keyring 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1367561 ']' 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1367561 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1367561 ']' 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1367561 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367561 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367561' 00:22:18.880 killing process with pid 1367561 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1367561 00:22:18.880 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1367561 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.448 21:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.448 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.348 00:22:21.348 real 0m7.287s 00:22:21.348 user 0m21.199s 00:22:21.348 sys 0m1.388s 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.348 ************************************ 00:22:21.348 END TEST nvmf_shutdown_tc2 00:22:21.348 ************************************ 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.348 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:21.606 ************************************ 00:22:21.606 START TEST nvmf_shutdown_tc3 00:22:21.606 ************************************ 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:21.606 21:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.606 21:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.606 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.606 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.606 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.606 21:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.606 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.606 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:22:21.865 00:22:21.865 --- 10.0.0.2 ping statistics --- 00:22:21.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.865 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:22:21.865 00:22:21.865 --- 10.0.0.1 ping statistics --- 00:22:21.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.865 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.865 
21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1369010 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1369010 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1369010 ']' 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.865 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.865 [2024-12-05 21:15:29.846083] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:21.865 [2024-12-05 21:15:29.846131] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.865 [2024-12-05 21:15:29.926987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.865 [2024-12-05 21:15:29.968495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.865 [2024-12-05 21:15:29.968531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.865 [2024-12-05 21:15:29.968537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.865 [2024-12-05 21:15:29.968544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.865 [2024-12-05 21:15:29.968549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.865 [2024-12-05 21:15:29.969986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.865 [2024-12-05 21:15:29.970076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.865 [2024-12-05 21:15:29.970185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.865 [2024-12-05 21:15:29.970184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.797 [2024-12-05 21:15:30.719345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.797 21:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.797 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.797 Malloc1 00:22:22.797 [2024-12-05 21:15:30.828591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.797 Malloc2 00:22:22.797 Malloc3 00:22:23.070 Malloc4 00:22:23.070 Malloc5 00:22:23.071 Malloc6 00:22:23.071 Malloc7 00:22:23.071 Malloc8 00:22:23.071 Malloc9 
00:22:23.330 Malloc10 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1369318 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1369318 /var/tmp/bdevperf.sock 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1369318 ']' 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:23.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.330 { 00:22:23.330 "params": { 00:22:23.330 "name": "Nvme$subsystem", 00:22:23.330 "trtype": "$TEST_TRANSPORT", 00:22:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.330 "adrfam": "ipv4", 00:22:23.330 "trsvcid": "$NVMF_PORT", 00:22:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.330 "hdgst": ${hdgst:-false}, 00:22:23.330 "ddgst": ${ddgst:-false} 00:22:23.330 }, 00:22:23.330 "method": "bdev_nvme_attach_controller" 00:22:23.330 } 00:22:23.330 EOF 00:22:23.330 )") 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.330 { 00:22:23.330 "params": { 00:22:23.330 "name": "Nvme$subsystem", 00:22:23.330 "trtype": "$TEST_TRANSPORT", 00:22:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.330 
"adrfam": "ipv4", 00:22:23.330 "trsvcid": "$NVMF_PORT", 00:22:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.330 "hdgst": ${hdgst:-false}, 00:22:23.330 "ddgst": ${ddgst:-false} 00:22:23.330 }, 00:22:23.330 "method": "bdev_nvme_attach_controller" 00:22:23.330 } 00:22:23.330 EOF 00:22:23.330 )") 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.330 { 00:22:23.330 "params": { 00:22:23.330 "name": "Nvme$subsystem", 00:22:23.330 "trtype": "$TEST_TRANSPORT", 00:22:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.330 "adrfam": "ipv4", 00:22:23.330 "trsvcid": "$NVMF_PORT", 00:22:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.330 "hdgst": ${hdgst:-false}, 00:22:23.330 "ddgst": ${ddgst:-false} 00:22:23.330 }, 00:22:23.330 "method": "bdev_nvme_attach_controller" 00:22:23.330 } 00:22:23.330 EOF 00:22:23.330 )") 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.330 { 00:22:23.330 "params": { 00:22:23.330 "name": "Nvme$subsystem", 00:22:23.330 "trtype": "$TEST_TRANSPORT", 00:22:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.330 "adrfam": "ipv4", 00:22:23.330 "trsvcid": "$NVMF_PORT", 00:22:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.330 "hdgst": ${hdgst:-false}, 00:22:23.330 "ddgst": ${ddgst:-false} 00:22:23.330 }, 00:22:23.330 "method": "bdev_nvme_attach_controller" 00:22:23.330 } 00:22:23.330 EOF 00:22:23.330 )") 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.330 { 00:22:23.330 "params": { 00:22:23.330 "name": "Nvme$subsystem", 00:22:23.330 "trtype": "$TEST_TRANSPORT", 00:22:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.330 "adrfam": "ipv4", 00:22:23.330 "trsvcid": "$NVMF_PORT", 00:22:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.330 "hdgst": ${hdgst:-false}, 00:22:23.330 "ddgst": ${ddgst:-false} 00:22:23.330 }, 00:22:23.330 "method": "bdev_nvme_attach_controller" 00:22:23.330 } 00:22:23.330 EOF 00:22:23.330 )") 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.330 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.330 { 00:22:23.330 "params": { 00:22:23.330 "name": "Nvme$subsystem", 00:22:23.330 "trtype": "$TEST_TRANSPORT", 00:22:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "$NVMF_PORT", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.331 "hdgst": ${hdgst:-false}, 00:22:23.331 "ddgst": 
${ddgst:-false} 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 } 00:22:23.331 EOF 00:22:23.331 )") 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.331 { 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme$subsystem", 00:22:23.331 "trtype": "$TEST_TRANSPORT", 00:22:23.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "$NVMF_PORT", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.331 "hdgst": ${hdgst:-false}, 00:22:23.331 "ddgst": ${ddgst:-false} 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 } 00:22:23.331 EOF 00:22:23.331 )") 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.331 [2024-12-05 21:15:31.305709] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:23.331 [2024-12-05 21:15:31.305762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369318 ] 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.331 { 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme$subsystem", 00:22:23.331 "trtype": "$TEST_TRANSPORT", 00:22:23.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "$NVMF_PORT", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.331 "hdgst": ${hdgst:-false}, 00:22:23.331 "ddgst": ${ddgst:-false} 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 } 00:22:23.331 EOF 00:22:23.331 )") 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.331 { 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme$subsystem", 00:22:23.331 "trtype": "$TEST_TRANSPORT", 00:22:23.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "$NVMF_PORT", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.331 "hdgst": ${hdgst:-false}, 00:22:23.331 "ddgst": ${ddgst:-false} 00:22:23.331 }, 00:22:23.331 "method": 
"bdev_nvme_attach_controller" 00:22:23.331 } 00:22:23.331 EOF 00:22:23.331 )") 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.331 { 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme$subsystem", 00:22:23.331 "trtype": "$TEST_TRANSPORT", 00:22:23.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "$NVMF_PORT", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.331 "hdgst": ${hdgst:-false}, 00:22:23.331 "ddgst": ${ddgst:-false} 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 } 00:22:23.331 EOF 00:22:23.331 )") 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:23.331 21:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme1", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme2", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme3", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme4", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 
00:22:23.331 "name": "Nvme5", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme6", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme7", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme8", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme9", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 },{ 00:22:23.331 "params": { 00:22:23.331 "name": "Nvme10", 00:22:23.331 "trtype": "tcp", 00:22:23.331 "traddr": "10.0.0.2", 00:22:23.331 "adrfam": "ipv4", 00:22:23.331 "trsvcid": "4420", 00:22:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:23.331 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:23.331 "hdgst": false, 00:22:23.331 "ddgst": false 00:22:23.331 }, 00:22:23.331 "method": "bdev_nvme_attach_controller" 00:22:23.331 }' 00:22:23.331 [2024-12-05 21:15:31.383295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.331 [2024-12-05 21:15:31.424447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.230 Running I/O for 10 seconds... 00:22:25.230 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.230 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:25.230 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:25.230 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.230 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:25.489 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:25.748 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:26.013 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:26.013 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:26.013 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.013 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.013 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.013 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1369010 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1369010 ']' 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1369010 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369010 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1369010' 00:22:26.013 killing process with pid 1369010 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1369010 00:22:26.013 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1369010 00:22:26.013 [2024-12-05 21:15:34.101745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.013 [2024-12-05 21:15:34.101799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.013 [2024-12-05 21:15:34.101808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.013 [2024-12-05 21:15:34.101815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.013 [2024-12-05 21:15:34.101821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.013 [2024-12-05 21:15:34.101828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.101835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.101841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.101847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.101853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the 
state(6) to be set
is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.102177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.102183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.102189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.102196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daac0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd690 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.014 [2024-12-05 21:15:34.103786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 
00:22:26.015 [2024-12-05 21:15:34.103800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103886] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.103997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 
is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 
00:22:26.015 [2024-12-05 21:15:34.104123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.104162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dafb0 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.105971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.105997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106032] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.015 [2024-12-05 21:15:34.106066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 
is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 
00:22:26.016 [2024-12-05 21:15:34.106283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106362] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.106423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db970 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.016 [2024-12-05 21:15:34.107182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 
is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 00:22:26.017 [2024-12-05 21:15:34.107302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set 
00:22:26.017 [2024-12-05 21:15:34.107308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dbe40 is same with the state(6) to be set
00:22:26.017 [2024-12-05 21:15:34.109385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc330 is same with the state(6) to be set
00:22:26.018 [2024-12-05 21:15:34.110758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc800 is same with the state(6) to be set
00:22:26.019 [2024-12-05 21:15:34.111991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dccd0 is same with the state(6) to be set
00:22:26.020 [2024-12-05 21:15:34.112977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd1a0 is same with the state(6) to be set
00:22:26.292 [2024-12-05 21:15:34.124749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.292 [2024-12-05 21:15:34.124790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.292 [2024-12-05 21:15:34.124800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.292 [2024-12-05 21:15:34.124807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.292 [2024-12-05 21:15:34.124815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.292 [2024-12-05 21:15:34.124823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.292 [2024-12-05 21:15:34.124830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.292 [2024-12-05 21:15:34.124837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.292 [2024-12-05 21:15:34.124844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0c610 is same with the state(6) to be set 00:22:26.293 [2024-12-05 21:15:34.124887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.124897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.124907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.124914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 
21:15:34.124922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.124929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.124938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.124946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.124954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168830 is same with the state(6) to be set 00:22:26.293 [2024-12-05 21:15:34.124985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.124996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214ea60 is same with the state(6) to be set 00:22:26.293 [2024-12-05 21:15:34.125074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21226a0 is same with the state(6) to be set 00:22:26.293 [2024-12-05 
21:15:34.125162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21228a0 is same with the state(6) to be set 00:22:26.293 [2024-12-05 21:15:34.125248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebe10 is same with the state(6) to be set 00:22:26.293 [2024-12-05 21:15:34.125332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 
21:15:34.125378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf7dd0 is same with the state(6) to be set 00:22:26.293 [2024-12-05 21:15:34.125424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.293 [2024-12-05 21:15:34.125463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.293 [2024-12-05 21:15:34.125470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214e550 is same with the state(6) to be set 00:22:26.294 [2024-12-05 21:15:34.125510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123c80 is same with the state(6) to be set 00:22:26.294 [2024-12-05 21:15:34.125593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125602] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.294 [2024-12-05 21:15:34.125648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.125654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf7940 is same with the state(6) to be set 00:22:26.294 [2024-12-05 21:15:34.127201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:26.294 [2024-12-05 21:15:34.127350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.294 [2024-12-05 21:15:34.127585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.294 [2024-12-05 21:15:34.127593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 
[2024-12-05 21:15:34.127729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.127990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.127998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.128006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.128016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.128023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.128031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.128040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.128048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.128058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.128067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 21:15:34.128074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.128082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.295 [2024-12-05 
21:15:34.128090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.295 [2024-12-05 21:15:34.128099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:26.296 [2024-12-05 21:15:34.128387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:26.296 [2024-12-05 21:15:34.128565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.296 [2024-12-05 21:15:34.128683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.296 [2024-12-05 21:15:34.128690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 
[2024-12-05 21:15:34.128929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.128985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.128995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.297 [2024-12-05 21:15:34.129184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.297 [2024-12-05 21:15:34.129193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 
21:15:34.129300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd500 is same with the state(6) to be set 00:22:26.298 [2024-12-05 21:15:34.129681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.298 [2024-12-05 21:15:34.129820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.298 [2024-12-05 21:15:34.129829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 
21:15:34.129917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.129991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.129999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130008] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 
[2024-12-05 21:15:34.130198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.299 [2024-12-05 21:15:34.130344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.299 [2024-12-05 21:15:34.130353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130577] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130665] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.130740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.300 [2024-12-05 21:15:34.130747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.300 [2024-12-05 21:15:34.134054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
00:22:26.300 [2024-12-05 21:15:34.134088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:26.300 [2024-12-05 21:15:34.134097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:26.300 [2024-12-05 21:15:34.134114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214ea60 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.134129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214e550 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.134139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf7940 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.134969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0c610 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.134999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168830 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.135021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21226a0 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.135037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21228a0 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.135053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebe10 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.135072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf7dd0 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.135089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2123c80 (9): Bad file descriptor
00:22:26.300 [2024-12-05 21:15:34.135195] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:26.300 [2024-12-05 21:15:34.135469] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:26.300 [2024-12-05 21:15:34.135522] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:26.300 [2024-12-05 21:15:34.135570] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:26.301 [2024-12-05 21:15:34.135620] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:26.301 [2024-12-05 21:15:34.135668] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:26.301 [2024-12-05 21:15:34.135923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.301 [2024-12-05 21:15:34.135939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf7940 with addr=10.0.0.2, port=4420
00:22:26.301 [2024-12-05 21:15:34.135947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf7940 is same with the state(6) to be set
00:22:26.301 [2024-12-05 21:15:34.136093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.301 [2024-12-05 21:15:34.136104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214e550 with addr=10.0.0.2, port=4420
00:22:26.301 [2024-12-05 21:15:34.136112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214e550 is same with the state(6) to be set
00:22:26.301 [2024-12-05 21:15:34.136194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.301 [2024-12-05 21:15:34.136204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214ea60 with addr=10.0.0.2, port=4420
00:22:26.301 [2024-12-05 21:15:34.136214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214ea60 is same with the state(6) to be set
00:22:26.301 [2024-12-05 21:15:34.136244]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136559] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136651] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.301 [2024-12-05 21:15:34.136693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.301 [2024-12-05 21:15:34.136700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.302 [2024-12-05 21:15:34.136708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.302 [2024-12-05 21:15:34.136715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.302 [2024-12-05 21:15:34.136725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.302 [2024-12-05 21:15:34.136731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.302 [2024-12-05 21:15:34.136740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.302 [2024-12-05 21:15:34.136748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.302 [2024-12-05 21:15:34.136757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.302 [2024-12-05 21:15:34.136763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 34 further identical READ (len:128) / ABORTED - SQ DELETION (00/08) pairs for cid:30-63, lba:28416-32640 elided ...]
00:22:26.303 [2024-12-05 21:15:34.137331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2223500 is same with the state(6) to be set
00:22:26.303 [2024-12-05 21:15:34.137492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf7940 (9): Bad file descriptor
00:22:26.303 [2024-12-05 21:15:34.137509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214e550 (9): Bad file descriptor
00:22:26.303 [2024-12-05 21:15:34.137518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214ea60 (9): Bad file descriptor
00:22:26.303 [2024-12-05 21:15:34.138454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:26.303 [2024-12-05 21:15:34.138481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:26.303 [2024-12-05 21:15:34.138489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:26.303 [2024-12-05 21:15:34.138498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:26.303 [2024-12-05 21:15:34.138507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:26.303 [2024-12-05 21:15:34.138516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:26.303 [2024-12-05 21:15:34.138529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:26.303 [2024-12-05 21:15:34.138536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:26.303 [2024-12-05 21:15:34.138543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:26.303 [2024-12-05 21:15:34.138550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:26.303 [2024-12-05 21:15:34.138557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:26.303 [2024-12-05 21:15:34.138564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:26.303 [2024-12-05 21:15:34.138571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:26.303 [2024-12-05 21:15:34.138723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.303 [2024-12-05 21:15:34.138737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf7dd0 with addr=10.0.0.2, port=4420
00:22:26.303 [2024-12-05 21:15:34.138746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf7dd0 is same with the state(6) to be set
00:22:26.303 [2024-12-05 21:15:34.138996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf7dd0 (9): Bad file descriptor
00:22:26.303 [2024-12-05 21:15:34.139035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:26.303 [2024-12-05 21:15:34.139042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:26.303 [2024-12-05 21:15:34.139051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:26.303 [2024-12-05 21:15:34.139058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:26.303 [2024-12-05 21:15:34.145099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.303 [2024-12-05 21:15:34.145119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ (len:128) / ABORTED - SQ DELETION (00/08) pairs for cid:1-63, lba:24704-32640 elided ...]
00:22:26.305 [2024-12-05 21:15:34.146153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c210 is same with the state(6) to be set
00:22:26.305 [2024-12-05 21:15:34.147127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.305 [2024-12-05 21:15:34.147142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.305 [2024-12-05 21:15:34.147152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.305 [2024-12-05 21:15:34.147160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.305 [2024-12-05 21:15:34.147170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.306 [2024-12-05 21:15:34.147178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.306 [2024-12-05 21:15:34.147186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.306 [2024-12-05 21:15:34.147287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 
21:15:34.147659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.306 [2024-12-05 21:15:34.147684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.306 [2024-12-05 21:15:34.147691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147746] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 
[2024-12-05 21:15:34.147948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.147988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.147995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.148200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.307 [2024-12-05 21:15:34.148208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f63b0 is same with the state(6) to be set 00:22:26.307 [2024-12-05 21:15:34.149202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.307 [2024-12-05 21:15:34.149221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.307 [2024-12-05 21:15:34.149232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.308 [2024-12-05 21:15:34.149533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.308 [2024-12-05 21:15:34.149779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.308 [2024-12-05 21:15:34.149787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 
21:15:34.149916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.149982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.149992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 
[2024-12-05 21:15:34.150223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.309 [2024-12-05 21:15:34.150287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.309 [2024-12-05 21:15:34.150296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.150303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.150312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.150319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.150328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7650 is same with the state(6) to be set 00:22:26.310 [2024-12-05 21:15:34.151304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.310 [2024-12-05 21:15:34.151488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.310 [2024-12-05 21:15:34.151708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.310 [2024-12-05 21:15:34.151716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 
21:15:34.151872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151962] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.151989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.151998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 
[2024-12-05 21:15:34.152147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.311 [2024-12-05 21:15:34.152206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.311 [2024-12-05 21:15:34.152213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.152380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.152387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8910 is same with the state(6) to be set 00:22:26.312 [2024-12-05 21:15:34.153385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.312 [2024-12-05 21:15:34.153428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.312 [2024-12-05 21:15:34.153722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.312 [2024-12-05 21:15:34.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.312 [2024-12-05 21:15:34.153754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153812] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.153984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.153993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 
21:15:34.154093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154182] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.313 [2024-12-05 21:15:34.154284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.313 [2024-12-05 21:15:34.154292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 
[2024-12-05 21:15:34.154381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.154464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.154471] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f9c20 is same with the state(6) to be set 00:22:26.314 [2024-12-05 21:15:34.155455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.314 [2024-12-05 21:15:34.155643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.314 [2024-12-05 21:15:34.155791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.314 [2024-12-05 21:15:34.155798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.155984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.155991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 
21:15:34.156006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156096] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.315 [2024-12-05 21:15:34.156214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.315 [2024-12-05 21:15:34.156228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 
[2024-12-05 21:15:34.156285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.316 [2024-12-05 21:15:34.156520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.316 [2024-12-05 21:15:34.156528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20faf30 is same with the state(6) to be set 00:22:26.316 [2024-12-05 21:15:34.157496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:26.316 [2024-12-05 21:15:34.157515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:26.316 [2024-12-05 21:15:34.157565] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:26.316 [2024-12-05 21:15:34.157579] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:22:26.316 [2024-12-05 21:15:34.157590] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:26.316 [2024-12-05 21:15:34.157601] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:22:26.316 [2024-12-05 21:15:34.157615] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:26.316 [2024-12-05 21:15:34.157626] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:22:26.316 [2024-12-05 21:15:34.157637] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:26.316 [2024-12-05 21:15:34.157697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:26.316 [2024-12-05 21:15:34.157709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:26.316 [2024-12-05 21:15:34.157719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:26.316 task offset: 24576 on job bdev=Nvme9n1 fails 00:22:26.316 00:22:26.316 Latency(us) 00:22:26.316 [2024-12-05T20:15:34.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme1n1 ended in about 0.90 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 00:22:26.316 Nvme1n1 : 0.90 212.57 13.29 70.86 0.00 223455.21 11983.73 218702.99 00:22:26.316 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme2n1 ended in about 0.91 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 00:22:26.316 Nvme2n1 : 0.91 210.56 13.16 70.19 0.00 221708.43 18974.23 189742.32 00:22:26.316 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme3n1 ended in about 0.90 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 
00:22:26.316 Nvme3n1 : 0.90 213.66 13.35 71.22 0.00 214571.70 6959.30 221698.93 00:22:26.316 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme4n1 ended in about 0.91 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 00:22:26.316 Nvme4n1 : 0.91 215.56 13.47 70.03 0.00 210344.85 12670.29 226692.14 00:22:26.316 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme5n1 ended in about 0.92 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 00:22:26.316 Nvme5n1 : 0.92 209.61 13.10 69.87 0.00 211157.58 17101.78 243669.09 00:22:26.316 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme6n1 ended in about 0.92 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 00:22:26.316 Nvme6n1 : 0.92 213.50 13.34 69.71 0.00 204619.72 6428.77 212711.13 00:22:26.316 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.316 Job: Nvme7n1 ended in about 0.92 seconds with error 00:22:26.316 Verification LBA range: start 0x0 length 0x400 00:22:26.316 Nvme7n1 : 0.92 208.67 13.04 69.56 0.00 204451.11 15666.22 208716.56 00:22:26.317 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.317 Job: Nvme8n1 ended in about 0.92 seconds with error 00:22:26.317 Verification LBA range: start 0x0 length 0x400 00:22:26.317 Nvme8n1 : 0.92 208.20 13.01 69.40 0.00 201168.34 15915.89 199728.76 00:22:26.317 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.317 Job: Nvme9n1 ended in about 0.90 seconds with error 00:22:26.317 Verification LBA range: start 0x0 length 0x400 00:22:26.317 Nvme9n1 : 0.90 214.11 13.38 71.37 0.00 191019.03 4275.44 240673.16 00:22:26.317 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.317 Job: Nvme10n1 ended in about 0.90 seconds with error 
00:22:26.317 Verification LBA range: start 0x0 length 0x400 00:22:26.317 Nvme10n1 : 0.90 213.91 13.37 71.30 0.00 187464.66 7489.83 219701.64 00:22:26.317 [2024-12-05T20:15:34.425Z] =================================================================================================================== 00:22:26.317 [2024-12-05T20:15:34.425Z] Total : 2120.36 132.52 703.51 0.00 206998.88 4275.44 243669.09 00:22:26.317 [2024-12-05 21:15:34.190097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:26.317 [2024-12-05 21:15:34.190144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:26.317 [2024-12-05 21:15:34.190167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:26.317 [2024-12-05 21:15:34.190465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.190484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cebe10 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.190494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebe10 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.190599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.190611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2123c80 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.190619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123c80 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.191963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:26.317 [2024-12-05 21:15:34.191982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
00:22:26.317 [2024-12-05 21:15:34.192209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.192224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21228a0 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.192232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21228a0 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.192326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.192338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21226a0 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.192345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21226a0 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.192506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.192519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0c610 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.192527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0c610 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.192628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.192638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168830 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.192646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168830 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.192860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.192872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x214ea60 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.192879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214ea60 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.192891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebe10 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.192902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2123c80 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.192930] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:22:26.317 [2024-12-05 21:15:34.192949] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:26.317 [2024-12-05 21:15:34.192960] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:22:26.317 [2024-12-05 21:15:34.193018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:26.317 [2024-12-05 21:15:34.193181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.193193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214e550 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.193201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214e550 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.193428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.193441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf7940 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.193448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf7940 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.193457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21228a0 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21226a0 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0c610 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168830 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214ea60 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error 
state 00:22:26.317 [2024-12-05 21:15:34.193511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:26.317 [2024-12-05 21:15:34.193520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:26.317 [2024-12-05 21:15:34.193529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:26.317 [2024-12-05 21:15:34.193538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:26.317 [2024-12-05 21:15:34.193544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:26.317 [2024-12-05 21:15:34.193551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:26.317 [2024-12-05 21:15:34.193557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:26.317 [2024-12-05 21:15:34.193767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.317 [2024-12-05 21:15:34.193781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf7dd0 with addr=10.0.0.2, port=4420 00:22:26.317 [2024-12-05 21:15:34.193789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf7dd0 is same with the state(6) to be set 00:22:26.317 [2024-12-05 21:15:34.193799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214e550 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf7940 (9): Bad file descriptor 00:22:26.317 [2024-12-05 21:15:34.193816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:26.317 [2024-12-05 21:15:34.193822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.193830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:26.318 [2024-12-05 21:15:34.193836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:26.318 [2024-12-05 21:15:34.193845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.193852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.193859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:22:26.318 [2024-12-05 21:15:34.193865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:26.318 [2024-12-05 21:15:34.193872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.193880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.193886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:26.318 [2024-12-05 21:15:34.193893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:26.318 [2024-12-05 21:15:34.193903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.193909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.193916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:26.318 [2024-12-05 21:15:34.193922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:26.318 [2024-12-05 21:15:34.193929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.193936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.193942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:22:26.318 [2024-12-05 21:15:34.193949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:26.318 [2024-12-05 21:15:34.193974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf7dd0 (9): Bad file descriptor 00:22:26.318 [2024-12-05 21:15:34.193984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.193990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.193997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:26.318 [2024-12-05 21:15:34.194003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:26.318 [2024-12-05 21:15:34.194010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.194016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.194023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:26.318 [2024-12-05 21:15:34.194029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:26.318 [2024-12-05 21:15:34.194051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:26.318 [2024-12-05 21:15:34.194059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:26.318 [2024-12-05 21:15:34.194066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:26.318 [2024-12-05 21:15:34.194072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:26.578 21:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1369318 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1369318 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1369318 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:27.517 21:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.517 rmmod nvme_tcp 00:22:27.517 rmmod nvme_fabrics 00:22:27.517 rmmod nvme_keyring 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1369010 ']' 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1369010 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1369010 ']' 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1369010 00:22:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1369010) - No such process 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1369010 is not found' 00:22:27.517 Process with pid 1369010 is not found 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:27.517 
21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.517 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.052 00:22:30.052 real 0m8.200s 00:22:30.052 user 0m21.124s 00:22:30.052 sys 0m1.409s 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.052 ************************************ 00:22:30.052 END TEST nvmf_shutdown_tc3 00:22:30.052 ************************************ 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:30.052 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:30.052 ************************************ 00:22:30.052 START TEST nvmf_shutdown_tc4 00:22:30.052 ************************************ 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.052 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:30.052 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:30.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.053 
21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:30.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:30.053 Found net devices under 0000:86:00.0: cvl_0_0 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:30.053 Found net devices under 0000:86:00.1: cvl_0_1 
00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.053 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.053 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:22:30.053 00:22:30.053 --- 10.0.0.2 ping statistics --- 00:22:30.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.053 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:30.053 00:22:30.053 --- 10.0.0.1 ping statistics --- 00:22:30.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.053 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:30.053 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1370426 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1370426 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1370426 ']' 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:30.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.054 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.054 [2024-12-05 21:15:38.123680] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:22:30.054 [2024-12-05 21:15:38.123729] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.313 [2024-12-05 21:15:38.203311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.313 [2024-12-05 21:15:38.245599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.313 [2024-12-05 21:15:38.245640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.313 [2024-12-05 21:15:38.245648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.313 [2024-12-05 21:15:38.245654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.313 [2024-12-05 21:15:38.245663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:30.313 [2024-12-05 21:15:38.247238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.313 [2024-12-05 21:15:38.247346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.313 [2024-12-05 21:15:38.247452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.313 [2024-12-05 21:15:38.247452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.313 [2024-12-05 21:15:38.385754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.313 21:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.313 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.572 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.572 Malloc1 00:22:30.572 [2024-12-05 21:15:38.497218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.572 Malloc2 00:22:30.572 Malloc3 00:22:30.572 Malloc4 00:22:30.572 Malloc5 00:22:30.831 Malloc6 00:22:30.831 Malloc7 00:22:30.831 Malloc8 00:22:30.831 Malloc9 
00:22:30.831 Malloc10 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1370698 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:30.831 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:31.089 [2024-12-05 21:15:38.993011] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
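The shutdown.sh trace above shows how the ten test subsystems come into being: `num_subsystems=({1..10})` at @23, then a loop at @28-29 that runs `cat` once per subsystem to append an RPC snippet, which @36 later feeds to `rpc_cmd`. A minimal sketch of that generation pattern follows; the output path and the RPC payload text are illustrative assumptions, since the heredoc body itself never appears in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the config-generation loop traced at shutdown.sh@23/@28-29.
# ASSUMPTIONS: the file path (/tmp/rpcs.txt) and the RPC line format are
# placeholders, not the exact SPDK script contents.
num_subsystems=({1..10})
rpcs=/tmp/rpcs.txt
rm -f "$rpcs"                      # mirrors the 'rm -rf ... rpcs.txt' at @27
for i in "${num_subsystems[@]}"; do
    # each iteration appends one subsystem definition (the traced 'cat')
    cat >> "$rpcs" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
EOF
done
wc -l < "$rpcs"                    # one generated line per subsystem
```

In the real run the generated file lives at test/nvmf/target/rpcs.txt and its batched execution is what produces the Malloc1 through Malloc10 bdev lines in the log above.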
00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1370426 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1370426 ']' 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1370426 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1370426 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1370426' 00:22:36.366 killing process with pid 1370426 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1370426 00:22:36.366 21:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1370426 00:22:36.366 [2024-12-05 21:15:43.993394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x626ee0 is same with the state(6) to be set 00:22:36.366 [2024-12-05 
21:15:43.993454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x626ee0 is same with the state(6) to be set 00:22:36.366 [2024-12-05 21:15:43.993463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x626ee0 is same with the state(6) to be set 00:22:36.366 [2024-12-05 21:15:43.993469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x626ee0 is same with the state(6) to be set 00:22:36.366 [2024-12-05 21:15:43.993476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x626ee0 is same with the state(6) to be set 00:22:36.366 [2024-12-05 21:15:43.993483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x626ee0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6273b0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6273b0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6273b0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6273b0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6273b0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6273b0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994878] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:43.994920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dad0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, 
sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.000509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the state(6) to be set 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.000535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.000544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the 
state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.000551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:44.000559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.000566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:44.000573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61f9e0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:44.000600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:36.367 starting I/O failed: -6 00:22:36.367 starting I/O failed: -6 00:22:36.367 starting I/O failed: -6 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61fed0 is same with the 
state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61fed0 is same with the state(6) to be set 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.001156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61fed0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.001400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203c0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203c0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6203c0 is same with the state(6) to be set 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.001435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203c0 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:44.001442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203c0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203c0 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write 
completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.001880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 [2024-12-05 21:15:44.001905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.001914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.367 [2024-12-05 21:15:44.001921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.367 Write completed with error (sct=0, sc=8) 00:22:36.367 starting I/O failed: -6 00:22:36.367 [2024-12-05 21:15:44.001928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.001935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.001941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.001949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.001957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.001964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800670 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 
00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.002470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.002484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.002504] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.002523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.002544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.002569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620d80 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.002607] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.003021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621250 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 
21:15:44.003040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621250 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.003047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621250 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.003054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621250 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.003061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621250 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 
00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.003353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.003376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.003384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.368 [2024-12-05 21:15:44.003395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.368 Write completed with error (sct=0, sc=8) 00:22:36.368 [2024-12-05 21:15:44.003401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.368 starting I/O failed: -6 00:22:36.368 [2024-12-05 21:15:44.003409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.003415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 [2024-12-05 21:15:44.003421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.369 starting I/O failed: -6 00:22:36.369 [2024-12-05 21:15:44.003427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.003435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set 
00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 [2024-12-05 21:15:44.003719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 [2024-12-05 21:15:44.003733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 starting I/O failed: -6 00:22:36.369 [2024-12-05 21:15:44.003741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.003748] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 [2024-12-05 21:15:44.003755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 starting I/O failed: -6 00:22:36.369 [2024-12-05 21:15:44.003762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.003768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6208b0 is same with the state(6) to be set 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 [2024-12-05 21:15:44.004164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:36.369 NVMe io qpair process completion error 00:22:36.369 [2024-12-05 21:15:44.004300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.004316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.004323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 
00:22:36.369 [2024-12-05 21:15:44.004329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.004336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.004342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.004349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.004356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e70 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.008650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624d10 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.008673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624d10 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.008680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624d10 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.008687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624d10 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.008694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624d10 is same with the state(6) to be set 00:22:36.369 [2024-12-05 21:15:44.008701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624d10 is same with the state(6) to be set 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error 
(sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O 
failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, 
sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 starting I/O failed: -6 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 [2024-12-05 21:15:44.010445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:36.369 NVMe io qpair process completion error 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.369 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error 
(sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 [2024-12-05 21:15:44.011337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6234a0 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.011360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6234a0 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.011355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:36.370 [2024-12-05 21:15:44.011373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6234a0 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.011385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6234a0 is same with the state(6) to be set 00:22:36.370 starting I/O failed: -6 00:22:36.370 [2024-12-05 21:15:44.011392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6234a0 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.011399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6234a0 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 [2024-12-05 21:15:44.011682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 [2024-12-05 21:15:44.011700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 [2024-12-05 21:15:44.011707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 starting I/O failed: -6 00:22:36.370 [2024-12-05 21:15:44.011715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 [2024-12-05 
21:15:44.011722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 [2024-12-05 21:15:44.011728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.011735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 [2024-12-05 21:15:44.011741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623990 is same with the state(6) to be set 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 
Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 [2024-12-05 21:15:44.012150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 [2024-12-05 21:15:44.012173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 starting I/O failed: -6 00:22:36.370 [2024-12-05 21:15:44.012180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.012190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.012197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.012203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.012204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:36.370 [2024-12-05 21:15:44.012210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be 
set 00:22:36.370 [2024-12-05 21:15:44.012218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.012224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 [2024-12-05 21:15:44.012230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x623e60 is same with the state(6) to be set 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 
00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 Write completed with error (sct=0, sc=8) 00:22:36.370 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 
00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 [2024-12-05 21:15:44.013207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, 
sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error 
(sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with 
error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 [2024-12-05 21:15:44.014764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:36.371 NVMe io qpair process completion error 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.371 starting I/O failed: -6 00:22:36.371 Write completed with error (sct=0, sc=8) 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 starting I/O failed: -6 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 Write completed with error (sct=0, sc=8) 00:22:36.372 starting I/O failed: -6 
00:22:36.372 Write completed with error (sct=0, sc=8)
00:22:36.372 starting I/O failed: -6
00:22:36.372 [the two entries above repeat for each outstanding write; consecutive duplicates collapsed]
00:22:36.372 [2024-12-05 21:15:44.015738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:36.372 [2024-12-05 21:15:44.016599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.372 [2024-12-05 21:15:44.017636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.373 [2024-12-05 21:15:44.019652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:36.373 NVMe io qpair process completion error
00:22:36.373 [2024-12-05 21:15:44.020589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:36.373 [2024-12-05 21:15:44.021489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.374 [2024-12-05 21:15:44.022515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.374 [2024-12-05 21:15:44.024402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:36.374 NVMe io qpair process completion error
00:22:36.375 [2024-12-05 21:15:44.025404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:36.375 [2024-12-05 21:15:44.026216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:36.375 [2024-12-05 21:15:44.027254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.375 Write completed with error (sct=0, sc=8)
00:22:36.375 starting I/O failed: -6
00:22:36.375 Write
completed with error (sct=0, sc=8) 00:22:36.375 starting I/O failed: -6 00:22:36.375 Write completed with error (sct=0, sc=8) 00:22:36.375 starting I/O failed: -6 00:22:36.375 Write completed with error (sct=0, sc=8) 00:22:36.375 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 
Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 
00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 [2024-12-05 21:15:44.032236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:36.376 NVMe io qpair process completion error 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error 
(sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 [2024-12-05 21:15:44.033197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O 
failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 [2024-12-05 21:15:44.033996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.376 starting I/O failed: -6 00:22:36.376 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 
00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with 
error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 [2024-12-05 21:15:44.035040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.377 Write completed with error (sct=0, 
sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error 
(sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with 
error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 [2024-12-05 21:15:44.036855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:36.377 NVMe io qpair process completion error 00:22:36.377 Write completed with error (sct=0, 
sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 starting I/O failed: -6 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.377 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting 
I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error 
(sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error (sct=0, sc=8) 00:22:36.378 starting I/O failed: -6 00:22:36.378 Write completed with error 
(sct=0, sc=8)
00:22:36.378 Write completed with error (sct=0, sc=8)
00:22:36.378 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines collapsed ...]
00:22:36.378 [2024-12-05 21:15:44.039359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines collapsed ...]
00:22:36.379 [2024-12-05 21:15:44.040924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.379 NVMe io qpair process completion error
[... repeated write-error lines collapsed ...]
00:22:36.379 [2024-12-05 21:15:44.041891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines collapsed ...]
00:22:36.379 [2024-12-05 21:15:44.042707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines collapsed ...]
00:22:36.380 [2024-12-05 21:15:44.043722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines collapsed ...]
00:22:36.380 [2024-12-05 21:15:44.049226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.380 NVMe io qpair process completion error
[... repeated write-error lines collapsed ...]
00:22:36.381 [2024-12-05 21:15:44.050397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines collapsed ...]
00:22:36.381 [2024-12-05 21:15:44.051245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines collapsed ...]
00:22:36.381 [2024-12-05 21:15:44.052249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines collapsed ...]
00:22:36.382 [2024-12-05 21:15:44.056682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.382 NVMe io qpair process completion error
00:22:36.382 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" lines collapsed; log truncated mid-line ...]
Write
completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write 
completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Write completed with error (sct=0, sc=8) 00:22:36.382 Initializing NVMe Controllers 00:22:36.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:36.382 Controller IO queue size 128, less than required. 
00:22:36.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:36.382 Controller IO queue size 128, less than required.
00:22:36.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:36.382 Controller IO queue size 128, less than required.
00:22:36.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:36.382 Controller IO queue size 128, less than required.
00:22:36.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:36.382 Controller IO queue size 128, less than required.
00:22:36.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:36.383 Controller IO queue size 128, less than required.
00:22:36.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:36.383 Controller IO queue size 128, less than required.
00:22:36.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:36.383 Controller IO queue size 128, less than required.
00:22:36.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:36.383 Controller IO queue size 128, less than required.
00:22:36.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:36.383 Controller IO queue size 128, less than required.
00:22:36.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:36.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:36.383 Initialization complete. Launching workers.
00:22:36.383 ========================================================
00:22:36.383 Latency(us)
00:22:36.383 Device Information                                              :    IOPS   MiB/s   Average       min        max
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2220.73   95.42  57638.47    854.31  112153.41
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2194.62   94.30  58335.29    665.47  129171.61
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2197.81   94.44  57696.46    717.01  104918.61
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2222.01   95.48  57630.99    512.43  127409.86
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2198.23   94.46  58137.82    554.20  122819.98
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2192.71   94.22  57827.12    656.14   99939.56
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2156.84   92.68  58798.18    895.73   99949.83
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2154.29   92.57  58883.42    693.50  101778.48
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2204.60   94.73  57554.27    729.07  104279.22
00:22:36.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2234.74   96.02  56828.18    453.01  102192.45
00:22:36.383 ========================================================
00:22:36.383 Total : 21976.58 944.31 57926.88 453.01 129171.61
00:22:36.383
00:22:36.383 [2024-12-05 21:15:44.061910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9eae0 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.061959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9c560 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.061987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1b9e900 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d740 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9cef0 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9cbc0 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9e720 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d410 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9da70 is same with the state(6) to be set
00:22:36.383 [2024-12-05 21:15:44.062184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9c890 is same with the state(6) to be set
00:22:36.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:36.383 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1370698
00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1370698
00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1370698 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.319 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.319 rmmod nvme_tcp 00:22:37.319 rmmod nvme_fabrics 00:22:37.578 rmmod nvme_keyring 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1370426 ']' 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1370426 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1370426 ']' 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1370426 00:22:37.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1370426) - No such process 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1370426 is not found' 00:22:37.578 Process with pid 1370426 is not found 
00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.578 21:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.485 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.485 00:22:39.485 real 0m9.790s 00:22:39.485 user 0m24.925s 00:22:39.485 sys 0m5.168s 00:22:39.485 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.485 21:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.485 ************************************ 00:22:39.485 END TEST nvmf_shutdown_tc4 00:22:39.485 ************************************ 00:22:39.485 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:39.485 00:22:39.485 real 0m41.748s 00:22:39.485 user 1m44.347s 00:22:39.485 sys 0m14.125s 00:22:39.485 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.485 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:39.485 ************************************ 00:22:39.485 END TEST nvmf_shutdown 00:22:39.485 ************************************ 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.746 ************************************ 00:22:39.746 START TEST nvmf_nsid 00:22:39.746 ************************************ 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:39.746 * Looking for test storage... 
00:22:39.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.746 
21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.746 --rc genhtml_branch_coverage=1 00:22:39.746 --rc genhtml_function_coverage=1 00:22:39.746 --rc genhtml_legend=1 00:22:39.746 --rc geninfo_all_blocks=1 00:22:39.746 --rc 
geninfo_unexecuted_blocks=1 00:22:39.746 00:22:39.746 ' 00:22:39.746 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.746 --rc genhtml_branch_coverage=1 00:22:39.746 --rc genhtml_function_coverage=1 00:22:39.746 --rc genhtml_legend=1 00:22:39.746 --rc geninfo_all_blocks=1 00:22:39.746 --rc geninfo_unexecuted_blocks=1 00:22:39.746 00:22:39.746 ' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:39.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.747 --rc genhtml_branch_coverage=1 00:22:39.747 --rc genhtml_function_coverage=1 00:22:39.747 --rc genhtml_legend=1 00:22:39.747 --rc geninfo_all_blocks=1 00:22:39.747 --rc geninfo_unexecuted_blocks=1 00:22:39.747 00:22:39.747 ' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:39.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.747 --rc genhtml_branch_coverage=1 00:22:39.747 --rc genhtml_function_coverage=1 00:22:39.747 --rc genhtml_legend=1 00:22:39.747 --rc geninfo_all_blocks=1 00:22:39.747 --rc geninfo_unexecuted_blocks=1 00:22:39.747 00:22:39.747 ' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.747 21:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.747 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.007 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:46.584 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:46.584 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:46.584 Found net devices under 0000:86:00.0: cvl_0_0 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:46.584 Found net devices under 0000:86:00.1: cvl_0_1 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.584 21:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.584 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:46.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:22:46.584 00:22:46.584 --- 10.0.0.2 ping statistics --- 00:22:46.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.584 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:22:46.584 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:46.584 00:22:46.584 --- 10.0.0.1 ping statistics --- 00:22:46.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.584 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.585 21:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1375159 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1375159 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1375159 ']' 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.585 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:46.585 [2024-12-05 21:15:53.852987] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:46.585 [2024-12-05 21:15:53.853033] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.585 [2024-12-05 21:15:53.932597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.585 [2024-12-05 21:15:53.973446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.585 [2024-12-05 21:15:53.973482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.585 [2024-12-05 21:15:53.973489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.585 [2024-12-05 21:15:53.973496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.585 [2024-12-05 21:15:53.973501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:46.585 [2024-12-05 21:15:53.974048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1375271 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.585 
21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8459164e-cfbc-43f9-9f05-e2b7f09611f7 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4e988e1b-371d-4a0c-b020-74dbb1e61f28 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d4283dda-edad-4cac-9162-eda99e89e4ae 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:46.585 null0 00:22:46.585 null1 00:22:46.585 [2024-12-05 21:15:54.160782] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:22:46.585 [2024-12-05 21:15:54.160827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375271 ] 00:22:46.585 null2 00:22:46.585 [2024-12-05 21:15:54.166647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.585 [2024-12-05 21:15:54.190832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1375271 /var/tmp/tgt2.sock 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1375271 ']' 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:46.585 [2024-12-05 21:15:54.234648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.585 [2024-12-05 21:15:54.280217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:46.585 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:46.843 [2024-12-05 21:15:54.793383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.843 [2024-12-05 21:15:54.809500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:46.843 nvme0n1 nvme0n2 00:22:46.843 nvme1n1 00:22:46.843 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:46.843 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:46.843 21:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:48.216 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8459164e-cfbc-43f9-9f05-e2b7f09611f7 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:49.150 21:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:49.150 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8459164ecfbc43f99f05e2b7f09611f7 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8459164ECFBC43F99F05E2B7F09611F7 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8459164ECFBC43F99F05E2B7F09611F7 == \8\4\5\9\1\6\4\E\C\F\B\C\4\3\F\9\9\F\0\5\E\2\B\7\F\0\9\6\1\1\F\7 ]] 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4e988e1b-371d-4a0c-b020-74dbb1e61f28 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:49.150 
21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4e988e1b371d4a0cb02074dbb1e61f28 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4E988E1B371D4A0CB02074DBB1E61F28 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4E988E1B371D4A0CB02074DBB1E61F28 == \4\E\9\8\8\E\1\B\3\7\1\D\4\A\0\C\B\0\2\0\7\4\D\B\B\1\E\6\1\F\2\8 ]] 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d4283dda-edad-4cac-9162-eda99e89e4ae 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d4283ddaedad4cac9162eda99e89e4ae
00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D4283DDAEDAD4CAC9162EDA99E89E4AE
00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D4283DDAEDAD4CAC9162EDA99E89E4AE == \D\4\2\8\3\D\D\A\E\D\A\D\4\C\A\C\9\1\6\2\E\D\A\9\9\E\8\9\E\4\A\E ]]
00:22:49.150 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1375271
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1375271 ']'
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1375271
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1375271
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1375271'
00:22:49.408 killing process with pid 1375271
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1375271
00:22:49.408 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1375271
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:49.667 rmmod nvme_tcp
00:22:49.667 rmmod nvme_fabrics
00:22:49.667 rmmod nvme_keyring
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1375159 ']'
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1375159
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1375159 ']'
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1375159
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:49.667 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1375159
00:22:49.925 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:49.925 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:49.925 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1375159'
00:22:49.925 killing process with pid 1375159
00:22:49.925 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1375159
00:22:49.925 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1375159
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:49.926 21:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:52.455 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:52.455
00:22:52.455 real 0m12.403s
00:22:52.455 user 0m9.664s
00:22:52.455 sys 0m5.518s
00:22:52.455 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:52.455 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:22:52.455 ************************************
00:22:52.455 END TEST nvmf_nsid
00:22:52.455 ************************************
00:22:52.455 21:16:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:22:52.455
00:22:52.455 real 11m57.008s
00:22:52.455 user 25m37.117s
00:22:52.455 sys 3m40.383s
00:22:52.455 21:16:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:52.455 21:16:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:52.455 ************************************
00:22:52.455 END TEST nvmf_target_extra
00:22:52.455 ************************************
00:22:52.455 21:16:00 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:22:52.455 21:16:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:52.455 21:16:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:52.455 21:16:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:52.455 ************************************
00:22:52.455 START TEST nvmf_host
00:22:52.455 ************************************
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:22:52.455 * Looking for test storage...
00:22:52.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:52.455 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:52.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.456 --rc genhtml_branch_coverage=1
00:22:52.456 --rc genhtml_function_coverage=1
00:22:52.456 --rc genhtml_legend=1
00:22:52.456 --rc geninfo_all_blocks=1
00:22:52.456 --rc geninfo_unexecuted_blocks=1
00:22:52.456
00:22:52.456 '
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:52.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.456 --rc genhtml_branch_coverage=1
00:22:52.456 --rc genhtml_function_coverage=1
00:22:52.456 --rc genhtml_legend=1
00:22:52.456 --rc geninfo_all_blocks=1
00:22:52.456 --rc geninfo_unexecuted_blocks=1
00:22:52.456
00:22:52.456 '
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:22:52.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.456 --rc genhtml_branch_coverage=1
00:22:52.456 --rc genhtml_function_coverage=1
00:22:52.456 --rc genhtml_legend=1
00:22:52.456 --rc geninfo_all_blocks=1
00:22:52.456 --rc geninfo_unexecuted_blocks=1
00:22:52.456
00:22:52.456 '
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:22:52.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.456 --rc genhtml_branch_coverage=1
00:22:52.456 --rc genhtml_function_coverage=1
00:22:52.456 --rc genhtml_legend=1
00:22:52.456 --rc geninfo_all_blocks=1
00:22:52.456 --rc geninfo_unexecuted_blocks=1
00:22:52.456
00:22:52.456 '
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:52.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:52.456 ************************************
00:22:52.456 START TEST nvmf_multicontroller
00:22:52.456 ************************************
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:22:52.456 * Looking for test storage...
00:22:52.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version
00:22:52.456 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.715 --rc genhtml_branch_coverage=1
00:22:52.715 --rc genhtml_function_coverage=1
00:22:52.715 --rc genhtml_legend=1
00:22:52.715 --rc geninfo_all_blocks=1
00:22:52.715 --rc geninfo_unexecuted_blocks=1
00:22:52.715
00:22:52.715 '
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.715 --rc genhtml_branch_coverage=1
00:22:52.715 --rc genhtml_function_coverage=1
00:22:52.715 --rc genhtml_legend=1
00:22:52.715 --rc geninfo_all_blocks=1
00:22:52.715 --rc geninfo_unexecuted_blocks=1
00:22:52.715
00:22:52.715 '
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:22:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.715 --rc genhtml_branch_coverage=1
00:22:52.715 --rc genhtml_function_coverage=1
00:22:52.715 --rc genhtml_legend=1
00:22:52.715 --rc geninfo_all_blocks=1
00:22:52.715 --rc geninfo_unexecuted_blocks=1
00:22:52.715
00:22:52.715 '
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:22:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.715 --rc genhtml_branch_coverage=1
00:22:52.715 --rc genhtml_function_coverage=1
00:22:52.715 --rc genhtml_legend=1
00:22:52.715 --rc geninfo_all_blocks=1
00:22:52.715 --rc geninfo_unexecuted_blocks=1
00:22:52.715
00:22:52.715 '
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:52.715 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:52.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable
00:22:52.716 21:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=()
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=()
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=()
00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@320 -- # local -ga e810 00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:59.282 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:59.283 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:59.283 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.283 21:16:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:59.283 Found net devices under 0000:86:00.0: cvl_0_0 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:59.283 Found net devices under 0000:86:00.1: cvl_0_1 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.283 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:22:59.284 00:22:59.284 --- 10.0.0.2 ping statistics --- 00:22:59.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.284 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:59.284 00:22:59.284 --- 10.0.0.1 ping statistics --- 00:22:59.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.284 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1379487 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1379487 00:22:59.284 21:16:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1379487 ']' 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.284 21:16:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.284 [2024-12-05 21:16:06.596414] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:22:59.284 [2024-12-05 21:16:06.596459] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.284 [2024-12-05 21:16:06.676719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:59.284 [2024-12-05 21:16:06.719840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.284 [2024-12-05 21:16:06.719872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:59.284 [2024-12-05 21:16:06.719879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.284 [2024-12-05 21:16:06.719885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.284 [2024-12-05 21:16:06.719890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.284 [2024-12-05 21:16:06.721220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.284 [2024-12-05 21:16:06.721325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.284 [2024-12-05 21:16:06.721327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 [2024-12-05 21:16:07.479534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 Malloc0 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 [2024-12-05 
21:16:07.550970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 [2024-12-05 21:16:07.558911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 Malloc1 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:59.544 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1379731 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1379731 /var/tmp/bdevperf.sock 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1379731 ']' 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.545 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.804 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.804 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:59.804 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:59.804 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.804 21:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.064 NVMe0n1 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.064 1 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.064 21:16:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.064 request: 00:23:00.064 { 00:23:00.064 "name": "NVMe0", 00:23:00.064 "trtype": "tcp", 00:23:00.064 "traddr": "10.0.0.2", 00:23:00.064 "adrfam": "ipv4", 00:23:00.064 "trsvcid": "4420", 00:23:00.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.064 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:00.064 "hostaddr": "10.0.0.1", 00:23:00.064 "prchk_reftag": false, 00:23:00.064 "prchk_guard": false, 00:23:00.064 "hdgst": false, 00:23:00.064 "ddgst": false, 00:23:00.064 "allow_unrecognized_csi": false, 00:23:00.064 "method": "bdev_nvme_attach_controller", 00:23:00.064 "req_id": 1 00:23:00.064 } 00:23:00.064 Got JSON-RPC error response 00:23:00.064 response: 00:23:00.064 { 00:23:00.064 "code": -114, 00:23:00.064 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.064 } 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.064 21:16:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.064 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.065 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.065 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.065 request: 00:23:00.065 { 00:23:00.065 "name": "NVMe0", 00:23:00.065 "trtype": "tcp", 00:23:00.065 "traddr": "10.0.0.2", 00:23:00.065 "adrfam": "ipv4", 00:23:00.065 "trsvcid": "4420", 00:23:00.065 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.065 "hostaddr": "10.0.0.1", 00:23:00.065 "prchk_reftag": false, 00:23:00.065 "prchk_guard": false, 00:23:00.065 "hdgst": false, 00:23:00.065 "ddgst": false, 00:23:00.065 "allow_unrecognized_csi": false, 00:23:00.324 "method": "bdev_nvme_attach_controller", 00:23:00.324 "req_id": 1 00:23:00.324 } 00:23:00.324 Got JSON-RPC error response 00:23:00.324 response: 00:23:00.324 { 00:23:00.324 "code": -114, 00:23:00.324 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.324 } 00:23:00.324 21:16:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.324 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.324 request: 00:23:00.324 { 00:23:00.324 "name": "NVMe0", 00:23:00.324 "trtype": "tcp", 00:23:00.324 "traddr": "10.0.0.2", 00:23:00.324 "adrfam": "ipv4", 00:23:00.324 "trsvcid": "4420", 00:23:00.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.324 "hostaddr": "10.0.0.1", 00:23:00.324 "prchk_reftag": false, 00:23:00.324 "prchk_guard": false, 00:23:00.324 "hdgst": false, 00:23:00.324 "ddgst": false, 00:23:00.325 "multipath": "disable", 00:23:00.325 "allow_unrecognized_csi": false, 00:23:00.325 "method": "bdev_nvme_attach_controller", 00:23:00.325 "req_id": 1 00:23:00.325 } 00:23:00.325 Got JSON-RPC error response 00:23:00.325 response: 00:23:00.325 { 00:23:00.325 "code": -114, 00:23:00.325 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:00.325 } 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.325 request: 00:23:00.325 { 00:23:00.325 "name": "NVMe0", 00:23:00.325 "trtype": "tcp", 00:23:00.325 "traddr": "10.0.0.2", 00:23:00.325 "adrfam": "ipv4", 00:23:00.325 "trsvcid": "4420", 00:23:00.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.325 "hostaddr": "10.0.0.1", 00:23:00.325 "prchk_reftag": false, 00:23:00.325 "prchk_guard": false, 00:23:00.325 "hdgst": false, 00:23:00.325 "ddgst": false, 00:23:00.325 "multipath": "failover", 00:23:00.325 "allow_unrecognized_csi": false, 00:23:00.325 "method": "bdev_nvme_attach_controller", 00:23:00.325 "req_id": 1 00:23:00.325 } 00:23:00.325 Got JSON-RPC error response 00:23:00.325 response: 00:23:00.325 { 00:23:00.325 "code": -114, 00:23:00.325 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.325 } 00:23:00.325 21:16:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.325 NVMe0n1 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.325 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.585 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:00.585 21:16:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.962 { 00:23:01.962 "results": [ 00:23:01.962 { 00:23:01.962 "job": "NVMe0n1", 00:23:01.962 "core_mask": "0x1", 00:23:01.962 "workload": "write", 00:23:01.962 "status": "finished", 00:23:01.962 "queue_depth": 128, 00:23:01.962 "io_size": 4096, 00:23:01.962 "runtime": 1.00651, 00:23:01.962 "iops": 23596.38751726262, 00:23:01.962 "mibps": 92.17338873930711, 00:23:01.962 "io_failed": 0, 00:23:01.962 "io_timeout": 0, 00:23:01.962 "avg_latency_us": 5407.992613854636, 00:23:01.962 "min_latency_us": 4150.613333333334, 00:23:01.962 "max_latency_us": 13044.784761904762 00:23:01.962 } 00:23:01.962 ], 00:23:01.962 "core_count": 1 00:23:01.962 } 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1379731 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1379731 ']' 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1379731 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379731 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379731' 00:23:01.962 killing process with pid 1379731 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1379731 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1379731 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:01.962 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:01.963 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:01.963 [2024-12-05 21:16:07.663265] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:23:01.963 [2024-12-05 21:16:07.663318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379731 ] 00:23:01.963 [2024-12-05 21:16:07.740756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.963 [2024-12-05 21:16:07.781996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.963 [2024-12-05 21:16:08.565380] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 296b856a-cb10-4d94-afec-ce6ab482cf9f already exists 00:23:01.963 [2024-12-05 21:16:08.565408] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:296b856a-cb10-4d94-afec-ce6ab482cf9f alias for bdev NVMe1n1 00:23:01.963 [2024-12-05 21:16:08.565417] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:01.963 Running I/O for 1 seconds... 00:23:01.963 23590.00 IOPS, 92.15 MiB/s 00:23:01.963 Latency(us) 00:23:01.963 [2024-12-05T20:16:10.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.963 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:01.963 NVMe0n1 : 1.01 23596.39 92.17 0.00 0.00 5407.99 4150.61 13044.78 00:23:01.963 [2024-12-05T20:16:10.071Z] =================================================================================================================== 00:23:01.963 [2024-12-05T20:16:10.071Z] Total : 23596.39 92.17 0.00 0.00 5407.99 4150.61 13044.78 00:23:01.963 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.963 00:23:01.963 Latency(us) 00:23:01.963 [2024-12-05T20:16:10.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.963 [2024-12-05T20:16:10.071Z] =================================================================================================================== 00:23:01.963 [2024-12-05T20:16:10.071Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:01.963 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.963 21:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.963 rmmod nvme_tcp 00:23:01.963 rmmod nvme_fabrics 00:23:01.963 rmmod nvme_keyring 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1379487 ']' 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1379487 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1379487 ']' 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1379487 
00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.963 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379487 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379487' 00:23:02.223 killing process with pid 1379487 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1379487 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1379487 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.223 21:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.762 00:23:04.762 real 0m11.960s 00:23:04.762 user 0m14.959s 00:23:04.762 sys 0m5.248s 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.762 ************************************ 00:23:04.762 END TEST nvmf_multicontroller 00:23:04.762 ************************************ 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.762 ************************************ 00:23:04.762 START TEST nvmf_aer 00:23:04.762 ************************************ 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:04.762 * Looking for test storage... 
00:23:04.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:04.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.762 --rc genhtml_branch_coverage=1 00:23:04.762 --rc genhtml_function_coverage=1 00:23:04.762 --rc genhtml_legend=1 00:23:04.762 --rc geninfo_all_blocks=1 00:23:04.762 --rc geninfo_unexecuted_blocks=1 00:23:04.762 00:23:04.762 ' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:04.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.762 --rc 
genhtml_branch_coverage=1 00:23:04.762 --rc genhtml_function_coverage=1 00:23:04.762 --rc genhtml_legend=1 00:23:04.762 --rc geninfo_all_blocks=1 00:23:04.762 --rc geninfo_unexecuted_blocks=1 00:23:04.762 00:23:04.762 ' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:04.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.762 --rc genhtml_branch_coverage=1 00:23:04.762 --rc genhtml_function_coverage=1 00:23:04.762 --rc genhtml_legend=1 00:23:04.762 --rc geninfo_all_blocks=1 00:23:04.762 --rc geninfo_unexecuted_blocks=1 00:23:04.762 00:23:04.762 ' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:04.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.762 --rc genhtml_branch_coverage=1 00:23:04.762 --rc genhtml_function_coverage=1 00:23:04.762 --rc genhtml_legend=1 00:23:04.762 --rc geninfo_all_blocks=1 00:23:04.762 --rc geninfo_unexecuted_blocks=1 00:23:04.762 00:23:04.762 ' 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.762 21:16:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.762 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.763 21:16:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.207 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:10.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:10.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.208 21:16:18 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:10.208 Found net devices under 0000:86:00.0: cvl_0_0 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:10.208 Found net devices under 0000:86:00.1: cvl_0_1 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.208 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:10.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:23:10.467 00:23:10.467 --- 10.0.0.2 ping statistics --- 00:23:10.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.467 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:23:10.467 00:23:10.467 --- 10.0.0.1 ping statistics --- 00:23:10.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.467 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.467 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1383623 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1383623 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1383623 ']' 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.760 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.760 [2024-12-05 21:16:18.658583] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:23:10.760 [2024-12-05 21:16:18.658632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.760 [2024-12-05 21:16:18.737239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.760 [2024-12-05 21:16:18.780017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:10.760 [2024-12-05 21:16:18.780053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.760 [2024-12-05 21:16:18.780060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.760 [2024-12-05 21:16:18.780066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.760 [2024-12-05 21:16:18.780071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.760 [2024-12-05 21:16:18.781540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.760 [2024-12-05 21:16:18.781577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.760 [2024-12-05 21:16:18.781683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.760 [2024-12-05 21:16:18.781685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 [2024-12-05 21:16:18.920412] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 Malloc0 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 [2024-12-05 21:16:18.984408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.019 21:16:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.019 [ 00:23:11.019 { 00:23:11.019 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.019 "subtype": "Discovery", 00:23:11.019 "listen_addresses": [], 00:23:11.019 "allow_any_host": true, 00:23:11.019 "hosts": [] 00:23:11.019 }, 00:23:11.019 { 00:23:11.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.019 "subtype": "NVMe", 00:23:11.019 "listen_addresses": [ 00:23:11.019 { 00:23:11.019 "trtype": "TCP", 00:23:11.019 "adrfam": "IPv4", 00:23:11.019 "traddr": "10.0.0.2", 00:23:11.019 "trsvcid": "4420" 00:23:11.019 } 00:23:11.019 ], 00:23:11.019 "allow_any_host": true, 00:23:11.019 "hosts": [], 00:23:11.019 "serial_number": "SPDK00000000000001", 00:23:11.020 "model_number": "SPDK bdev Controller", 00:23:11.020 "max_namespaces": 2, 00:23:11.020 "min_cntlid": 1, 00:23:11.020 "max_cntlid": 65519, 00:23:11.020 "namespaces": [ 00:23:11.020 { 00:23:11.020 "nsid": 1, 00:23:11.020 "bdev_name": "Malloc0", 00:23:11.020 "name": "Malloc0", 00:23:11.020 "nguid": "99F3BE5D350D4AE0952AC24B2254893C", 00:23:11.020 "uuid": "99f3be5d-350d-4ae0-952a-c24b2254893c" 00:23:11.020 } 00:23:11.020 ] 00:23:11.020 } 00:23:11.020 ] 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1383762 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:11.020 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.278 Malloc1 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.278 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.278 Asynchronous Event Request test 00:23:11.278 Attaching to 10.0.0.2 00:23:11.278 Attached to 10.0.0.2 00:23:11.278 Registering asynchronous event callbacks... 00:23:11.278 Starting namespace attribute notice tests for all controllers... 00:23:11.278 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:11.278 aer_cb - Changed Namespace 00:23:11.278 Cleaning up... 
00:23:11.278 [ 00:23:11.278 { 00:23:11.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.279 "subtype": "Discovery", 00:23:11.279 "listen_addresses": [], 00:23:11.279 "allow_any_host": true, 00:23:11.279 "hosts": [] 00:23:11.279 }, 00:23:11.279 { 00:23:11.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.279 "subtype": "NVMe", 00:23:11.279 "listen_addresses": [ 00:23:11.279 { 00:23:11.279 "trtype": "TCP", 00:23:11.279 "adrfam": "IPv4", 00:23:11.279 "traddr": "10.0.0.2", 00:23:11.279 "trsvcid": "4420" 00:23:11.279 } 00:23:11.279 ], 00:23:11.279 "allow_any_host": true, 00:23:11.279 "hosts": [], 00:23:11.279 "serial_number": "SPDK00000000000001", 00:23:11.279 "model_number": "SPDK bdev Controller", 00:23:11.279 "max_namespaces": 2, 00:23:11.279 "min_cntlid": 1, 00:23:11.279 "max_cntlid": 65519, 00:23:11.279 "namespaces": [ 00:23:11.279 { 00:23:11.279 "nsid": 1, 00:23:11.279 "bdev_name": "Malloc0", 00:23:11.279 "name": "Malloc0", 00:23:11.279 "nguid": "99F3BE5D350D4AE0952AC24B2254893C", 00:23:11.279 "uuid": "99f3be5d-350d-4ae0-952a-c24b2254893c" 00:23:11.279 }, 00:23:11.279 { 00:23:11.279 "nsid": 2, 00:23:11.279 "bdev_name": "Malloc1", 00:23:11.279 "name": "Malloc1", 00:23:11.279 "nguid": "DB2F5D414F8E459DBE95727E1F5DD435", 00:23:11.279 "uuid": "db2f5d41-4f8e-459d-be95-727e1f5dd435" 00:23:11.279 } 00:23:11.279 ] 00:23:11.279 } 00:23:11.279 ] 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1383762 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.279 21:16:19 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.279 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.279 rmmod nvme_tcp 00:23:11.279 rmmod nvme_fabrics 00:23:11.537 rmmod nvme_keyring 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1383623 ']' 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1383623 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1383623 ']' 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1383623 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:11.537 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383623 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383623' 00:23:11.538 killing process with pid 1383623 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1383623 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1383623 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.538 21:16:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.075 00:23:14.075 real 0m9.273s 00:23:14.075 user 0m5.121s 00:23:14.075 sys 0m4.909s 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.075 ************************************ 00:23:14.075 END TEST nvmf_aer 00:23:14.075 ************************************ 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.075 ************************************ 00:23:14.075 START TEST nvmf_async_init 00:23:14.075 ************************************ 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:14.075 * Looking for test storage... 
00:23:14.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.075 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.076 21:16:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:14.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.076 --rc genhtml_branch_coverage=1 00:23:14.076 --rc genhtml_function_coverage=1 00:23:14.076 --rc genhtml_legend=1 00:23:14.076 --rc geninfo_all_blocks=1 00:23:14.076 --rc geninfo_unexecuted_blocks=1 00:23:14.076 
00:23:14.076 ' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:14.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.076 --rc genhtml_branch_coverage=1 00:23:14.076 --rc genhtml_function_coverage=1 00:23:14.076 --rc genhtml_legend=1 00:23:14.076 --rc geninfo_all_blocks=1 00:23:14.076 --rc geninfo_unexecuted_blocks=1 00:23:14.076 00:23:14.076 ' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:14.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.076 --rc genhtml_branch_coverage=1 00:23:14.076 --rc genhtml_function_coverage=1 00:23:14.076 --rc genhtml_legend=1 00:23:14.076 --rc geninfo_all_blocks=1 00:23:14.076 --rc geninfo_unexecuted_blocks=1 00:23:14.076 00:23:14.076 ' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:14.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.076 --rc genhtml_branch_coverage=1 00:23:14.076 --rc genhtml_function_coverage=1 00:23:14.076 --rc genhtml_legend=1 00:23:14.076 --rc geninfo_all_blocks=1 00:23:14.076 --rc geninfo_unexecuted_blocks=1 00:23:14.076 00:23:14.076 ' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2770271d36b440b7aa32fc3f91f69267 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.076 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.077 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.077 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.077 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.077 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.077 21:16:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.648 21:16:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:20.648 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:20.648 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:20.648 Found net devices under 0000:86:00.0: cvl_0_0 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:20.648 Found net devices under 0000:86:00.1: cvl_0_1 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.648 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:23:20.649 00:23:20.649 --- 10.0.0.2 ping statistics --- 00:23:20.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.649 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:23:20.649 00:23:20.649 --- 10.0.0.1 ping statistics --- 00:23:20.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.649 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1387286 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1387286 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1387286 ']' 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.649 21:16:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 [2024-12-05 21:16:27.978555] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:23:20.649 [2024-12-05 21:16:27.978597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.649 [2024-12-05 21:16:28.056757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.649 [2024-12-05 21:16:28.095347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.649 [2024-12-05 21:16:28.095385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.649 [2024-12-05 21:16:28.095392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.649 [2024-12-05 21:16:28.095398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.649 [2024-12-05 21:16:28.095402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.649 [2024-12-05 21:16:28.095946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 [2024-12-05 21:16:28.244005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 null0 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2770271d36b440b7aa32fc3f91f69267 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 [2024-12-05 21:16:28.296269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 nvme0n1 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.649 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.649 [ 00:23:20.649 { 00:23:20.649 "name": "nvme0n1", 00:23:20.649 "aliases": [ 00:23:20.649 "2770271d-36b4-40b7-aa32-fc3f91f69267" 00:23:20.649 ], 00:23:20.649 "product_name": "NVMe disk", 00:23:20.649 "block_size": 512, 00:23:20.649 "num_blocks": 2097152, 00:23:20.649 "uuid": "2770271d-36b4-40b7-aa32-fc3f91f69267", 00:23:20.649 "numa_id": 1, 00:23:20.649 "assigned_rate_limits": { 00:23:20.649 "rw_ios_per_sec": 0, 00:23:20.649 "rw_mbytes_per_sec": 0, 00:23:20.649 "r_mbytes_per_sec": 0, 00:23:20.649 "w_mbytes_per_sec": 0 00:23:20.649 }, 00:23:20.649 "claimed": false, 00:23:20.649 "zoned": false, 00:23:20.649 "supported_io_types": { 00:23:20.649 "read": true, 00:23:20.649 "write": true, 00:23:20.649 "unmap": false, 00:23:20.649 "flush": true, 00:23:20.649 "reset": true, 00:23:20.649 "nvme_admin": true, 00:23:20.649 "nvme_io": true, 00:23:20.649 "nvme_io_md": false, 00:23:20.649 "write_zeroes": true, 00:23:20.649 "zcopy": false, 00:23:20.649 "get_zone_info": false, 00:23:20.649 "zone_management": false, 00:23:20.649 "zone_append": false, 00:23:20.649 "compare": true, 00:23:20.649 "compare_and_write": true, 00:23:20.649 "abort": true, 00:23:20.649 "seek_hole": false, 00:23:20.649 "seek_data": false, 00:23:20.649 "copy": true, 00:23:20.649 
"nvme_iov_md": false 00:23:20.649 }, 00:23:20.649 "memory_domains": [ 00:23:20.649 { 00:23:20.649 "dma_device_id": "system", 00:23:20.649 "dma_device_type": 1 00:23:20.649 } 00:23:20.649 ], 00:23:20.649 "driver_specific": { 00:23:20.649 "nvme": [ 00:23:20.649 { 00:23:20.649 "trid": { 00:23:20.649 "trtype": "TCP", 00:23:20.649 "adrfam": "IPv4", 00:23:20.649 "traddr": "10.0.0.2", 00:23:20.649 "trsvcid": "4420", 00:23:20.649 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:20.649 }, 00:23:20.649 "ctrlr_data": { 00:23:20.649 "cntlid": 1, 00:23:20.649 "vendor_id": "0x8086", 00:23:20.649 "model_number": "SPDK bdev Controller", 00:23:20.649 "serial_number": "00000000000000000000", 00:23:20.650 "firmware_revision": "25.01", 00:23:20.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.650 "oacs": { 00:23:20.650 "security": 0, 00:23:20.650 "format": 0, 00:23:20.650 "firmware": 0, 00:23:20.650 "ns_manage": 0 00:23:20.650 }, 00:23:20.650 "multi_ctrlr": true, 00:23:20.650 "ana_reporting": false 00:23:20.650 }, 00:23:20.650 "vs": { 00:23:20.650 "nvme_version": "1.3" 00:23:20.650 }, 00:23:20.650 "ns_data": { 00:23:20.650 "id": 1, 00:23:20.650 "can_share": true 00:23:20.650 } 00:23:20.650 } 00:23:20.650 ], 00:23:20.650 "mp_policy": "active_passive" 00:23:20.650 } 00:23:20.650 } 00:23:20.650 ] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.650 [2024-12-05 21:16:28.560795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.650 [2024-12-05 21:16:28.560849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x24a8c00 (9): Bad file descriptor 00:23:20.650 [2024-12-05 21:16:28.692441] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.650 [ 00:23:20.650 { 00:23:20.650 "name": "nvme0n1", 00:23:20.650 "aliases": [ 00:23:20.650 "2770271d-36b4-40b7-aa32-fc3f91f69267" 00:23:20.650 ], 00:23:20.650 "product_name": "NVMe disk", 00:23:20.650 "block_size": 512, 00:23:20.650 "num_blocks": 2097152, 00:23:20.650 "uuid": "2770271d-36b4-40b7-aa32-fc3f91f69267", 00:23:20.650 "numa_id": 1, 00:23:20.650 "assigned_rate_limits": { 00:23:20.650 "rw_ios_per_sec": 0, 00:23:20.650 "rw_mbytes_per_sec": 0, 00:23:20.650 "r_mbytes_per_sec": 0, 00:23:20.650 "w_mbytes_per_sec": 0 00:23:20.650 }, 00:23:20.650 "claimed": false, 00:23:20.650 "zoned": false, 00:23:20.650 "supported_io_types": { 00:23:20.650 "read": true, 00:23:20.650 "write": true, 00:23:20.650 "unmap": false, 00:23:20.650 "flush": true, 00:23:20.650 "reset": true, 00:23:20.650 "nvme_admin": true, 00:23:20.650 "nvme_io": true, 00:23:20.650 "nvme_io_md": false, 00:23:20.650 "write_zeroes": true, 00:23:20.650 "zcopy": false, 00:23:20.650 "get_zone_info": false, 00:23:20.650 "zone_management": false, 00:23:20.650 "zone_append": false, 00:23:20.650 "compare": true, 00:23:20.650 "compare_and_write": true, 00:23:20.650 "abort": true, 00:23:20.650 "seek_hole": false, 00:23:20.650 "seek_data": false, 00:23:20.650 "copy": true, 00:23:20.650 "nvme_iov_md": false 00:23:20.650 }, 00:23:20.650 "memory_domains": [ 
00:23:20.650 { 00:23:20.650 "dma_device_id": "system", 00:23:20.650 "dma_device_type": 1 00:23:20.650 } 00:23:20.650 ], 00:23:20.650 "driver_specific": { 00:23:20.650 "nvme": [ 00:23:20.650 { 00:23:20.650 "trid": { 00:23:20.650 "trtype": "TCP", 00:23:20.650 "adrfam": "IPv4", 00:23:20.650 "traddr": "10.0.0.2", 00:23:20.650 "trsvcid": "4420", 00:23:20.650 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:20.650 }, 00:23:20.650 "ctrlr_data": { 00:23:20.650 "cntlid": 2, 00:23:20.650 "vendor_id": "0x8086", 00:23:20.650 "model_number": "SPDK bdev Controller", 00:23:20.650 "serial_number": "00000000000000000000", 00:23:20.650 "firmware_revision": "25.01", 00:23:20.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.650 "oacs": { 00:23:20.650 "security": 0, 00:23:20.650 "format": 0, 00:23:20.650 "firmware": 0, 00:23:20.650 "ns_manage": 0 00:23:20.650 }, 00:23:20.650 "multi_ctrlr": true, 00:23:20.650 "ana_reporting": false 00:23:20.650 }, 00:23:20.650 "vs": { 00:23:20.650 "nvme_version": "1.3" 00:23:20.650 }, 00:23:20.650 "ns_data": { 00:23:20.650 "id": 1, 00:23:20.650 "can_share": true 00:23:20.650 } 00:23:20.650 } 00:23:20.650 ], 00:23:20.650 "mp_policy": "active_passive" 00:23:20.650 } 00:23:20.650 } 00:23:20.650 ] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.paWAuMxbtd 
00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.paWAuMxbtd 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.paWAuMxbtd 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.650 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 [2024-12-05 21:16:28.765414] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.910 [2024-12-05 21:16:28.765516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 [2024-12-05 21:16:28.785479] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.910 nvme0n1 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 [ 00:23:20.910 { 00:23:20.910 "name": "nvme0n1", 00:23:20.910 "aliases": [ 00:23:20.910 "2770271d-36b4-40b7-aa32-fc3f91f69267" 00:23:20.910 ], 00:23:20.910 "product_name": "NVMe disk", 00:23:20.910 "block_size": 512, 00:23:20.910 "num_blocks": 2097152, 00:23:20.910 "uuid": "2770271d-36b4-40b7-aa32-fc3f91f69267", 00:23:20.910 "numa_id": 1, 00:23:20.910 "assigned_rate_limits": { 00:23:20.910 "rw_ios_per_sec": 0, 00:23:20.910 
"rw_mbytes_per_sec": 0, 00:23:20.910 "r_mbytes_per_sec": 0, 00:23:20.910 "w_mbytes_per_sec": 0 00:23:20.910 }, 00:23:20.910 "claimed": false, 00:23:20.910 "zoned": false, 00:23:20.910 "supported_io_types": { 00:23:20.910 "read": true, 00:23:20.910 "write": true, 00:23:20.910 "unmap": false, 00:23:20.910 "flush": true, 00:23:20.910 "reset": true, 00:23:20.910 "nvme_admin": true, 00:23:20.910 "nvme_io": true, 00:23:20.910 "nvme_io_md": false, 00:23:20.910 "write_zeroes": true, 00:23:20.910 "zcopy": false, 00:23:20.910 "get_zone_info": false, 00:23:20.910 "zone_management": false, 00:23:20.910 "zone_append": false, 00:23:20.910 "compare": true, 00:23:20.910 "compare_and_write": true, 00:23:20.910 "abort": true, 00:23:20.910 "seek_hole": false, 00:23:20.910 "seek_data": false, 00:23:20.910 "copy": true, 00:23:20.910 "nvme_iov_md": false 00:23:20.910 }, 00:23:20.910 "memory_domains": [ 00:23:20.910 { 00:23:20.910 "dma_device_id": "system", 00:23:20.910 "dma_device_type": 1 00:23:20.910 } 00:23:20.910 ], 00:23:20.910 "driver_specific": { 00:23:20.910 "nvme": [ 00:23:20.910 { 00:23:20.910 "trid": { 00:23:20.910 "trtype": "TCP", 00:23:20.910 "adrfam": "IPv4", 00:23:20.910 "traddr": "10.0.0.2", 00:23:20.910 "trsvcid": "4421", 00:23:20.910 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:20.910 }, 00:23:20.910 "ctrlr_data": { 00:23:20.910 "cntlid": 3, 00:23:20.910 "vendor_id": "0x8086", 00:23:20.910 "model_number": "SPDK bdev Controller", 00:23:20.910 "serial_number": "00000000000000000000", 00:23:20.910 "firmware_revision": "25.01", 00:23:20.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.910 "oacs": { 00:23:20.910 "security": 0, 00:23:20.910 "format": 0, 00:23:20.910 "firmware": 0, 00:23:20.910 "ns_manage": 0 00:23:20.910 }, 00:23:20.910 "multi_ctrlr": true, 00:23:20.910 "ana_reporting": false 00:23:20.910 }, 00:23:20.910 "vs": { 00:23:20.910 "nvme_version": "1.3" 00:23:20.910 }, 00:23:20.910 "ns_data": { 00:23:20.910 "id": 1, 00:23:20.910 "can_share": true 00:23:20.910 } 
00:23:20.910 } 00:23:20.911 ], 00:23:20.911 "mp_policy": "active_passive" 00:23:20.911 } 00:23:20.911 } 00:23:20.911 ] 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.paWAuMxbtd 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.911 rmmod nvme_tcp 00:23:20.911 rmmod nvme_fabrics 00:23:20.911 rmmod nvme_keyring 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:20.911 21:16:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1387286 ']' 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1387286 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1387286 ']' 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1387286 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1387286 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1387286' 00:23:20.911 killing process with pid 1387286 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1387286 00:23:20.911 21:16:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1387286 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.170 
21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.170 21:16:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.705 00:23:23.705 real 0m9.440s 00:23:23.705 user 0m3.086s 00:23:23.705 sys 0m4.795s 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.705 ************************************ 00:23:23.705 END TEST nvmf_async_init 00:23:23.705 ************************************ 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.705 ************************************ 00:23:23.705 START TEST dma 00:23:23.705 ************************************ 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:23.705 * Looking for test storage... 00:23:23.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:23.705 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.706 --rc genhtml_branch_coverage=1 00:23:23.706 --rc genhtml_function_coverage=1 00:23:23.706 --rc genhtml_legend=1 00:23:23.706 --rc geninfo_all_blocks=1 00:23:23.706 --rc geninfo_unexecuted_blocks=1 00:23:23.706 00:23:23.706 ' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.706 --rc genhtml_branch_coverage=1 00:23:23.706 --rc genhtml_function_coverage=1 
00:23:23.706 --rc genhtml_legend=1 00:23:23.706 --rc geninfo_all_blocks=1 00:23:23.706 --rc geninfo_unexecuted_blocks=1 00:23:23.706 00:23:23.706 ' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.706 --rc genhtml_branch_coverage=1 00:23:23.706 --rc genhtml_function_coverage=1 00:23:23.706 --rc genhtml_legend=1 00:23:23.706 --rc geninfo_all_blocks=1 00:23:23.706 --rc geninfo_unexecuted_blocks=1 00:23:23.706 00:23:23.706 ' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.706 --rc genhtml_branch_coverage=1 00:23:23.706 --rc genhtml_function_coverage=1 00:23:23.706 --rc genhtml_legend=1 00:23:23.706 --rc geninfo_all_blocks=1 00:23:23.706 --rc geninfo_unexecuted_blocks=1 00:23:23.706 00:23:23.706 ' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:23.706 
21:16:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:23.706 00:23:23.706 real 0m0.208s 00:23:23.706 user 0m0.123s 00:23:23.706 sys 0m0.099s 00:23:23.706 21:16:31 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 ************************************ 00:23:23.706 END TEST dma 00:23:23.706 ************************************ 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 ************************************ 00:23:23.706 START TEST nvmf_identify 00:23:23.706 ************************************ 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:23.706 * Looking for test storage... 
00:23:23.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:23.706 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.707 --rc genhtml_branch_coverage=1 00:23:23.707 --rc genhtml_function_coverage=1 00:23:23.707 --rc genhtml_legend=1 00:23:23.707 --rc geninfo_all_blocks=1 00:23:23.707 --rc geninfo_unexecuted_blocks=1 00:23:23.707 00:23:23.707 ' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.707 --rc genhtml_branch_coverage=1 00:23:23.707 --rc genhtml_function_coverage=1 00:23:23.707 --rc genhtml_legend=1 00:23:23.707 --rc geninfo_all_blocks=1 00:23:23.707 --rc geninfo_unexecuted_blocks=1 00:23:23.707 00:23:23.707 ' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.707 --rc genhtml_branch_coverage=1 00:23:23.707 --rc genhtml_function_coverage=1 00:23:23.707 --rc genhtml_legend=1 00:23:23.707 --rc geninfo_all_blocks=1 00:23:23.707 --rc geninfo_unexecuted_blocks=1 00:23:23.707 00:23:23.707 ' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.707 --rc genhtml_branch_coverage=1 00:23:23.707 --rc genhtml_function_coverage=1 00:23:23.707 --rc genhtml_legend=1 00:23:23.707 --rc geninfo_all_blocks=1 00:23:23.707 --rc geninfo_unexecuted_blocks=1 00:23:23.707 00:23:23.707 ' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.707 21:16:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.276 21:16:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:30.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.276 
21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:30.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.276 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:30.276 Found net devices under 0000:86:00.0: cvl_0_0 00:23:30.277 21:16:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:30.277 Found net devices under 0000:86:00.1: cvl_0_1 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:30.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:23:30.277 00:23:30.277 --- 10.0.0.2 ping statistics --- 00:23:30.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.277 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:30.277 00:23:30.277 --- 10.0.0.1 ping statistics --- 00:23:30.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.277 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1391110 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1391110 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1391110 ']' 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.277 21:16:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.277 [2024-12-05 21:16:37.746864] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:23:30.277 [2024-12-05 21:16:37.746907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.277 [2024-12-05 21:16:37.825687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.277 [2024-12-05 21:16:37.870103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.277 [2024-12-05 21:16:37.870142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.277 [2024-12-05 21:16:37.870150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.277 [2024-12-05 21:16:37.870156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.277 [2024-12-05 21:16:37.870161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.277 [2024-12-05 21:16:37.875389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.277 [2024-12-05 21:16:37.875460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.277 [2024-12-05 21:16:37.875570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.277 [2024-12-05 21:16:37.875570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.536 [2024-12-05 21:16:38.582813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.536 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.798 Malloc0 00:23:30.798 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.798 21:16:38 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.798 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.798 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.798 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.799 [2024-12-05 21:16:38.683044] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.799 21:16:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:30.799 [
00:23:30.799   {
00:23:30.799     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:30.799     "subtype": "Discovery",
00:23:30.799     "listen_addresses": [
00:23:30.799       {
00:23:30.799         "trtype": "TCP",
00:23:30.799         "adrfam": "IPv4",
00:23:30.799         "traddr": "10.0.0.2",
00:23:30.799         "trsvcid": "4420"
00:23:30.799       }
00:23:30.799     ],
00:23:30.799     "allow_any_host": true,
00:23:30.799     "hosts": []
00:23:30.799   },
00:23:30.799   {
00:23:30.799     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:30.799     "subtype": "NVMe",
00:23:30.799     "listen_addresses": [
00:23:30.799       {
00:23:30.799         "trtype": "TCP",
00:23:30.799         "adrfam": "IPv4",
00:23:30.799         "traddr": "10.0.0.2",
00:23:30.799         "trsvcid": "4420"
00:23:30.799       }
00:23:30.799     ],
00:23:30.799     "allow_any_host": true,
00:23:30.799     "hosts": [],
00:23:30.799     "serial_number": "SPDK00000000000001",
00:23:30.799     "model_number": "SPDK bdev Controller",
00:23:30.799     "max_namespaces": 32,
00:23:30.799     "min_cntlid": 1,
00:23:30.799     "max_cntlid": 65519,
00:23:30.799     "namespaces": [
00:23:30.799       {
00:23:30.799         "nsid": 1,
00:23:30.799         "bdev_name": "Malloc0",
00:23:30.799         "name": "Malloc0",
00:23:30.799         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:23:30.799         "eui64": "ABCDEF0123456789",
00:23:30.799         "uuid": "98ecbbd9-634b-41c7-8c24-9a290d6971fb"
00:23:30.799       }
00:23:30.799     ]
00:23:30.799   }
00:23:30.799 ]
00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.799 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:30.799 [2024-12-05 21:16:38.735304] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:23:30.799 [2024-12-05 21:16:38.735347] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391238 ] 00:23:30.799 [2024-12-05 21:16:38.774886] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:30.799 [2024-12-05 21:16:38.774933] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:30.799 [2024-12-05 21:16:38.774938] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:30.799 [2024-12-05 21:16:38.774951] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:30.799 [2024-12-05 21:16:38.774959] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:30.799 [2024-12-05 21:16:38.778693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:30.799 [2024-12-05 21:16:38.778729] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16d0690 0 00:23:30.799 [2024-12-05 21:16:38.785447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:30.799 [2024-12-05 21:16:38.785461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:30.799 [2024-12-05 21:16:38.785465] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:30.799 [2024-12-05 21:16:38.785468] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:30.799 [2024-12-05 21:16:38.785501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.785506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.785509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.799 [2024-12-05 21:16:38.785520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:30.799 [2024-12-05 21:16:38.785534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.799 [2024-12-05 21:16:38.793378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.799 [2024-12-05 21:16:38.793386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.799 [2024-12-05 21:16:38.793389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.799 [2024-12-05 21:16:38.793405] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:30.799 [2024-12-05 21:16:38.793412] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:30.799 [2024-12-05 21:16:38.793417] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:30.799 [2024-12-05 21:16:38.793429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 
00:23:30.799 [2024-12-05 21:16:38.793444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.799 [2024-12-05 21:16:38.793457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.799 [2024-12-05 21:16:38.793545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.799 [2024-12-05 21:16:38.793551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.799 [2024-12-05 21:16:38.793554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.799 [2024-12-05 21:16:38.793562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:30.799 [2024-12-05 21:16:38.793569] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:30.799 [2024-12-05 21:16:38.793576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.799 [2024-12-05 21:16:38.793588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.799 [2024-12-05 21:16:38.793598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.799 [2024-12-05 21:16:38.793662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.799 [2024-12-05 21:16:38.793669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:30.799 [2024-12-05 21:16:38.793672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.799 [2024-12-05 21:16:38.793675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.793680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:30.800 [2024-12-05 21:16:38.793687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:30.800 [2024-12-05 21:16:38.793693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.793696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.793700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.793705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.800 [2024-12-05 21:16:38.793715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.800 [2024-12-05 21:16:38.793811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.800 [2024-12-05 21:16:38.793817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.800 [2024-12-05 21:16:38.793820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.793824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.793828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:30.800 [2024-12-05 21:16:38.793837] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.793840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.793843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.793849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.800 [2024-12-05 21:16:38.793858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.800 [2024-12-05 21:16:38.793964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.800 [2024-12-05 21:16:38.793970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.800 [2024-12-05 21:16:38.793973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.793976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.793980] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:30.800 [2024-12-05 21:16:38.793985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:30.800 [2024-12-05 21:16:38.793991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:30.800 [2024-12-05 21:16:38.794101] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:30.800 [2024-12-05 21:16:38.794105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:30.800 [2024-12-05 21:16:38.794113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.800 [2024-12-05 21:16:38.794135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.800 [2024-12-05 21:16:38.794208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.800 [2024-12-05 21:16:38.794213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.800 [2024-12-05 21:16:38.794216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.794223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:30.800 [2024-12-05 21:16:38.794232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.800 [2024-12-05 21:16:38.794255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.800 [2024-12-05 
21:16:38.794351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.800 [2024-12-05 21:16:38.794357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.800 [2024-12-05 21:16:38.794360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.794372] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:30.800 [2024-12-05 21:16:38.794377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:30.800 [2024-12-05 21:16:38.794384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:30.800 [2024-12-05 21:16:38.794395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:30.800 [2024-12-05 21:16:38.794405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.800 [2024-12-05 21:16:38.794424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.800 [2024-12-05 21:16:38.794523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.800 [2024-12-05 21:16:38.794529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:30.800 [2024-12-05 21:16:38.794532] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794535] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d0690): datao=0, datal=4096, cccid=0 00:23:30.800 [2024-12-05 21:16:38.794539] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1732100) on tqpair(0x16d0690): expected_datao=0, payload_size=4096 00:23:30.800 [2024-12-05 21:16:38.794543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794550] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794553] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.800 [2024-12-05 21:16:38.794607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.800 [2024-12-05 21:16:38.794610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.794620] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:30.800 [2024-12-05 21:16:38.794627] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:30.800 [2024-12-05 21:16:38.794631] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:30.800 [2024-12-05 21:16:38.794635] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:30.800 [2024-12-05 21:16:38.794639] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:30.800 [2024-12-05 21:16:38.794644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:30.800 [2024-12-05 21:16:38.794650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:30.800 [2024-12-05 21:16:38.794656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.800 [2024-12-05 21:16:38.794678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.800 [2024-12-05 21:16:38.794753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.800 [2024-12-05 21:16:38.794759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.800 [2024-12-05 21:16:38.794762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.800 [2024-12-05 21:16:38.794771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794785] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.800 [2024-12-05 21:16:38.794790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.800 [2024-12-05 21:16:38.794806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.800 [2024-12-05 21:16:38.794822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.800 [2024-12-05 21:16:38.794829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.800 [2024-12-05 21:16:38.794833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.801 [2024-12-05 21:16:38.794837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:30.801 [2024-12-05 21:16:38.794847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:30.801 [2024-12-05 21:16:38.794853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.794856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d0690) 00:23:30.801 [2024-12-05 21:16:38.794862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.801 [2024-12-05 21:16:38.794873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732100, cid 0, qid 0 00:23:30.801 [2024-12-05 21:16:38.794877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732280, cid 1, qid 0 00:23:30.801 [2024-12-05 21:16:38.794881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732400, cid 2, qid 0 00:23:30.801 [2024-12-05 21:16:38.794885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.801 [2024-12-05 21:16:38.794890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732700, cid 4, qid 0 00:23:30.801 [2024-12-05 21:16:38.795006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.801 [2024-12-05 21:16:38.795012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.801 [2024-12-05 21:16:38.795015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732700) on tqpair=0x16d0690 00:23:30.801 [2024-12-05 21:16:38.795022] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:30.801 [2024-12-05 21:16:38.795027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:30.801 [2024-12-05 21:16:38.795035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d0690) 00:23:30.801 [2024-12-05 21:16:38.795046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.801 [2024-12-05 21:16:38.795056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732700, cid 4, qid 0 00:23:30.801 [2024-12-05 21:16:38.795127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.801 [2024-12-05 21:16:38.795133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.801 [2024-12-05 21:16:38.795136] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795139] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d0690): datao=0, datal=4096, cccid=4 00:23:30.801 [2024-12-05 21:16:38.795143] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1732700) on tqpair(0x16d0690): expected_datao=0, payload_size=4096 00:23:30.801 [2024-12-05 21:16:38.795147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795175] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795179] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.801 [2024-12-05 21:16:38.795262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.801 [2024-12-05 21:16:38.795266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1732700) on tqpair=0x16d0690 00:23:30.801 [2024-12-05 21:16:38.795278] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:30.801 [2024-12-05 21:16:38.795298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d0690) 00:23:30.801 [2024-12-05 21:16:38.795307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.801 [2024-12-05 21:16:38.795313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d0690) 00:23:30.801 [2024-12-05 21:16:38.795325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.801 [2024-12-05 21:16:38.795338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732700, cid 4, qid 0 00:23:30.801 [2024-12-05 21:16:38.795342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732880, cid 5, qid 0 00:23:30.801 [2024-12-05 21:16:38.795446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.801 [2024-12-05 21:16:38.795453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.801 [2024-12-05 21:16:38.795456] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795459] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d0690): datao=0, datal=1024, cccid=4 00:23:30.801 [2024-12-05 21:16:38.795463] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1732700) on tqpair(0x16d0690): expected_datao=0, payload_size=1024 00:23:30.801 [2024-12-05 21:16:38.795466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795472] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.801 [2024-12-05 21:16:38.795485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.801 [2024-12-05 21:16:38.795488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.795494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732880) on tqpair=0x16d0690 00:23:30.801 [2024-12-05 21:16:38.837479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.801 [2024-12-05 21:16:38.837490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.801 [2024-12-05 21:16:38.837494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732700) on tqpair=0x16d0690 00:23:30.801 [2024-12-05 21:16:38.837508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d0690) 00:23:30.801 [2024-12-05 21:16:38.837519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.801 [2024-12-05 21:16:38.837536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732700, cid 4, qid 0 00:23:30.801 [2024-12-05 21:16:38.837610] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:30.801 [2024-12-05 21:16:38.837617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:30.801 [2024-12-05 21:16:38.837620] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837624] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d0690): datao=0, datal=3072, cccid=4 00:23:30.801 [2024-12-05 21:16:38.837628] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1732700) on tqpair(0x16d0690): expected_datao=0, payload_size=3072 00:23:30.801 [2024-12-05 21:16:38.837632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837638] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.801 [2024-12-05 21:16:38.837670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.801 [2024-12-05 21:16:38.837672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732700) on tqpair=0x16d0690 00:23:30.801 [2024-12-05 21:16:38.837683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.801 [2024-12-05 21:16:38.837687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d0690) 00:23:30.801 [2024-12-05 21:16:38.837693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.801 [2024-12-05 21:16:38.837707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732700, cid 4, qid 0 00:23:30.801 [2024-12-05 
21:16:38.837775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:30.801 [2024-12-05 21:16:38.837781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:30.801 [2024-12-05 21:16:38.837784] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:30.801 [2024-12-05 21:16:38.837787] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d0690): datao=0, datal=8, cccid=4
00:23:30.801 [2024-12-05 21:16:38.837791] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1732700) on tqpair(0x16d0690): expected_datao=0, payload_size=8
00:23:30.801 [2024-12-05 21:16:38.837794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:30.801 [2024-12-05 21:16:38.837800] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:30.801 [2024-12-05 21:16:38.837803] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:30.801 [2024-12-05 21:16:38.878513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:30.801 [2024-12-05 21:16:38.878523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:30.801 [2024-12-05 21:16:38.878526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:30.801 [2024-12-05 21:16:38.878530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732700) on tqpair=0x16d0690
00:23:30.801 =====================================================
00:23:30.801 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:30.801 =====================================================
00:23:30.801 Controller Capabilities/Features
00:23:30.801 ================================
00:23:30.801 Vendor ID: 0000
00:23:30.801 Subsystem Vendor ID: 0000
00:23:30.801 Serial Number: ....................
00:23:30.801 Model Number: ........................................
00:23:30.801 Firmware Version: 25.01
00:23:30.801 Recommended Arb Burst: 0
00:23:30.801 IEEE OUI Identifier: 00 00 00
00:23:30.801 Multi-path I/O
00:23:30.801 May have multiple subsystem ports: No
00:23:30.801 May have multiple controllers: No
00:23:30.801 Associated with SR-IOV VF: No
00:23:30.801 Max Data Transfer Size: 131072
00:23:30.801 Max Number of Namespaces: 0
00:23:30.801 Max Number of I/O Queues: 1024
00:23:30.801 NVMe Specification Version (VS): 1.3
00:23:30.801 NVMe Specification Version (Identify): 1.3
00:23:30.801 Maximum Queue Entries: 128
00:23:30.801 Contiguous Queues Required: Yes
00:23:30.801 Arbitration Mechanisms Supported
00:23:30.802 Weighted Round Robin: Not Supported
00:23:30.802 Vendor Specific: Not Supported
00:23:30.802 Reset Timeout: 15000 ms
00:23:30.802 Doorbell Stride: 4 bytes
00:23:30.802 NVM Subsystem Reset: Not Supported
00:23:30.802 Command Sets Supported
00:23:30.802 NVM Command Set: Supported
00:23:30.802 Boot Partition: Not Supported
00:23:30.802 Memory Page Size Minimum: 4096 bytes
00:23:30.802 Memory Page Size Maximum: 4096 bytes
00:23:30.802 Persistent Memory Region: Not Supported
00:23:30.802 Optional Asynchronous Events Supported
00:23:30.802 Namespace Attribute Notices: Not Supported
00:23:30.802 Firmware Activation Notices: Not Supported
00:23:30.802 ANA Change Notices: Not Supported
00:23:30.802 PLE Aggregate Log Change Notices: Not Supported
00:23:30.802 LBA Status Info Alert Notices: Not Supported
00:23:30.802 EGE Aggregate Log Change Notices: Not Supported
00:23:30.802 Normal NVM Subsystem Shutdown event: Not Supported
00:23:30.802 Zone Descriptor Change Notices: Not Supported
00:23:30.802 Discovery Log Change Notices: Supported
00:23:30.802 Controller Attributes
00:23:30.802 128-bit Host Identifier: Not Supported
00:23:30.802 Non-Operational Permissive Mode: Not Supported
00:23:30.802 NVM Sets: Not Supported
00:23:30.802 Read Recovery Levels: Not Supported
00:23:30.802 Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported 00:23:30.802 Traffic Based Keep ALive: Not Supported 00:23:30.802 Namespace Granularity: Not Supported 00:23:30.802 SQ Associations: Not Supported 00:23:30.802 UUID List: Not Supported 00:23:30.802 Multi-Domain Subsystem: Not Supported 00:23:30.802 Fixed Capacity Management: Not Supported 00:23:30.802 Variable Capacity Management: Not Supported 00:23:30.802 Delete Endurance Group: Not Supported 00:23:30.802 Delete NVM Set: Not Supported 00:23:30.802 Extended LBA Formats Supported: Not Supported 00:23:30.802 Flexible Data Placement Supported: Not Supported 00:23:30.802 00:23:30.802 Controller Memory Buffer Support 00:23:30.802 ================================ 00:23:30.802 Supported: No 00:23:30.802 00:23:30.802 Persistent Memory Region Support 00:23:30.802 ================================ 00:23:30.802 Supported: No 00:23:30.802 00:23:30.802 Admin Command Set Attributes 00:23:30.802 ============================ 00:23:30.802 Security Send/Receive: Not Supported 00:23:30.802 Format NVM: Not Supported 00:23:30.802 Firmware Activate/Download: Not Supported 00:23:30.802 Namespace Management: Not Supported 00:23:30.802 Device Self-Test: Not Supported 00:23:30.802 Directives: Not Supported 00:23:30.802 NVMe-MI: Not Supported 00:23:30.802 Virtualization Management: Not Supported 00:23:30.802 Doorbell Buffer Config: Not Supported 00:23:30.802 Get LBA Status Capability: Not Supported 00:23:30.802 Command & Feature Lockdown Capability: Not Supported 00:23:30.802 Abort Command Limit: 1 00:23:30.802 Async Event Request Limit: 4 00:23:30.802 Number of Firmware Slots: N/A 00:23:30.802 Firmware Slot 1 Read-Only: N/A 00:23:30.802 Firmware Activation Without Reset: N/A 00:23:30.802 Multiple Update Detection Support: N/A 00:23:30.802 Firmware Update Granularity: No Information Provided 00:23:30.802 Per-Namespace SMART Log: No 00:23:30.802 Asymmetric Namespace Access Log Page: Not Supported 00:23:30.802 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:30.802 Command Effects Log Page: Not Supported 00:23:30.802 Get Log Page Extended Data: Supported 00:23:30.802 Telemetry Log Pages: Not Supported 00:23:30.802 Persistent Event Log Pages: Not Supported 00:23:30.802 Supported Log Pages Log Page: May Support 00:23:30.802 Commands Supported & Effects Log Page: Not Supported 00:23:30.802 Feature Identifiers & Effects Log Page:May Support 00:23:30.802 NVMe-MI Commands & Effects Log Page: May Support 00:23:30.802 Data Area 4 for Telemetry Log: Not Supported 00:23:30.802 Error Log Page Entries Supported: 128 00:23:30.802 Keep Alive: Not Supported 00:23:30.802 00:23:30.802 NVM Command Set Attributes 00:23:30.802 ========================== 00:23:30.802 Submission Queue Entry Size 00:23:30.802 Max: 1 00:23:30.802 Min: 1 00:23:30.802 Completion Queue Entry Size 00:23:30.802 Max: 1 00:23:30.802 Min: 1 00:23:30.802 Number of Namespaces: 0 00:23:30.802 Compare Command: Not Supported 00:23:30.802 Write Uncorrectable Command: Not Supported 00:23:30.802 Dataset Management Command: Not Supported 00:23:30.802 Write Zeroes Command: Not Supported 00:23:30.802 Set Features Save Field: Not Supported 00:23:30.802 Reservations: Not Supported 00:23:30.802 Timestamp: Not Supported 00:23:30.802 Copy: Not Supported 00:23:30.802 Volatile Write Cache: Not Present 00:23:30.802 Atomic Write Unit (Normal): 1 00:23:30.802 Atomic Write Unit (PFail): 1 00:23:30.802 Atomic Compare & Write Unit: 1 00:23:30.802 Fused Compare & Write: Supported 00:23:30.802 Scatter-Gather List 00:23:30.802 SGL Command Set: Supported 00:23:30.802 SGL Keyed: Supported 00:23:30.802 SGL Bit Bucket Descriptor: Not Supported 00:23:30.802 SGL Metadata Pointer: Not Supported 00:23:30.802 Oversized SGL: Not Supported 00:23:30.802 SGL Metadata Address: Not Supported 00:23:30.802 SGL Offset: Supported 00:23:30.802 Transport SGL Data Block: Not Supported 00:23:30.802 Replay Protected Memory Block: Not Supported 00:23:30.802 00:23:30.802 
Firmware Slot Information 00:23:30.802 ========================= 00:23:30.802 Active slot: 0 00:23:30.802 00:23:30.802 00:23:30.802 Error Log 00:23:30.802 ========= 00:23:30.802 00:23:30.802 Active Namespaces 00:23:30.802 ================= 00:23:30.802 Discovery Log Page 00:23:30.802 ================== 00:23:30.802 Generation Counter: 2 00:23:30.802 Number of Records: 2 00:23:30.802 Record Format: 0 00:23:30.802 00:23:30.802 Discovery Log Entry 0 00:23:30.802 ---------------------- 00:23:30.802 Transport Type: 3 (TCP) 00:23:30.802 Address Family: 1 (IPv4) 00:23:30.802 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:30.802 Entry Flags: 00:23:30.802 Duplicate Returned Information: 1 00:23:30.802 Explicit Persistent Connection Support for Discovery: 1 00:23:30.802 Transport Requirements: 00:23:30.802 Secure Channel: Not Required 00:23:30.802 Port ID: 0 (0x0000) 00:23:30.802 Controller ID: 65535 (0xffff) 00:23:30.802 Admin Max SQ Size: 128 00:23:30.802 Transport Service Identifier: 4420 00:23:30.802 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:30.802 Transport Address: 10.0.0.2 00:23:30.802 Discovery Log Entry 1 00:23:30.802 ---------------------- 00:23:30.802 Transport Type: 3 (TCP) 00:23:30.802 Address Family: 1 (IPv4) 00:23:30.802 Subsystem Type: 2 (NVM Subsystem) 00:23:30.802 Entry Flags: 00:23:30.802 Duplicate Returned Information: 0 00:23:30.802 Explicit Persistent Connection Support for Discovery: 0 00:23:30.802 Transport Requirements: 00:23:30.802 Secure Channel: Not Required 00:23:30.802 Port ID: 0 (0x0000) 00:23:30.802 Controller ID: 65535 (0xffff) 00:23:30.802 Admin Max SQ Size: 128 00:23:30.802 Transport Service Identifier: 4420 00:23:30.802 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:30.802 Transport Address: 10.0.0.2 [2024-12-05 21:16:38.878611] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:30.802 [2024-12-05 
21:16:38.878622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732100) on tqpair=0x16d0690 00:23:30.802 [2024-12-05 21:16:38.878628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.802 [2024-12-05 21:16:38.878633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732280) on tqpair=0x16d0690 00:23:30.802 [2024-12-05 21:16:38.878637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.802 [2024-12-05 21:16:38.878641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732400) on tqpair=0x16d0690 00:23:30.802 [2024-12-05 21:16:38.878645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.802 [2024-12-05 21:16:38.878649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.802 [2024-12-05 21:16:38.878653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.802 [2024-12-05 21:16:38.878663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.802 [2024-12-05 21:16:38.878667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.802 [2024-12-05 21:16:38.878670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.802 [2024-12-05 21:16:38.878676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.802 [2024-12-05 21:16:38.878689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.802 [2024-12-05 21:16:38.878747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.802 [2024-12-05 
21:16:38.878753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.878756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.878765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.878777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.878790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.878859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.878864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.878867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.878875] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:30.803 [2024-12-05 21:16:38.878879] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:30.803 [2024-12-05 21:16:38.878887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 
[2024-12-05 21:16:38.878894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.878899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.878908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.878969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.878974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.878977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.878989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.878996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.879087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on 
tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.879208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:30.803 [2024-12-05 21:16:38.879312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.879422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.879529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.879629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.803 [2024-12-05 21:16:38.879652] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.803 [2024-12-05 21:16:38.879662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.803 [2024-12-05 21:16:38.879732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.803 [2024-12-05 21:16:38.879738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.803 [2024-12-05 21:16:38.879741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.803 [2024-12-05 21:16:38.879752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.803 [2024-12-05 21:16:38.879758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.879764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 21:16:38.879773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.879839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 21:16:38.879846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.879849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.879853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.879861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.879864] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.879867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.879873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 21:16:38.879882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.879948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 21:16:38.879953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.879956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.879959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.879967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.879971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.879974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.879979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 21:16:38.879988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.880048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 21:16:38.880054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.880057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880060] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.880068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.880080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 21:16:38.880088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.880158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 21:16:38.880164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.880167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.880178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.880190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 21:16:38.880199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.880267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 
21:16:38.880273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.880276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.880288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.880295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.880300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 21:16:38.880310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.884378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 21:16:38.884388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.884391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.884394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.884404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.884408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.884411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d0690) 00:23:30.804 [2024-12-05 21:16:38.884417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.804 [2024-12-05 
21:16:38.884429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1732580, cid 3, qid 0 00:23:30.804 [2024-12-05 21:16:38.884583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.804 [2024-12-05 21:16:38.884589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.804 [2024-12-05 21:16:38.884592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.804 [2024-12-05 21:16:38.884595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1732580) on tqpair=0x16d0690 00:23:30.804 [2024-12-05 21:16:38.884603] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:23:30.804 00:23:30.804 21:16:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:31.068 [2024-12-05 21:16:38.924409] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:23:31.068 [2024-12-05 21:16:38.924446] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391344 ] 00:23:31.068 [2024-12-05 21:16:38.961638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:31.068 [2024-12-05 21:16:38.961678] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:31.068 [2024-12-05 21:16:38.961683] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:31.068 [2024-12-05 21:16:38.961695] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:31.068 [2024-12-05 21:16:38.961703] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:31.068 [2024-12-05 21:16:38.965565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:31.068 [2024-12-05 21:16:38.965593] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1469690 0 00:23:31.068 [2024-12-05 21:16:38.973382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:31.068 [2024-12-05 21:16:38.973396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:31.068 [2024-12-05 21:16:38.973400] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:31.068 [2024-12-05 21:16:38.973403] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:31.068 [2024-12-05 21:16:38.973428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.973433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.973437] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.973446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:31.068 [2024-12-05 21:16:38.973462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.980376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.980384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.980387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.980401] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:31.068 [2024-12-05 21:16:38.980407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:31.068 [2024-12-05 21:16:38.980412] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:31.068 [2024-12-05 21:16:38.980422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.980436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.068 [2024-12-05 21:16:38.980448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.980604] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.980610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.980613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.980620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:31.068 [2024-12-05 21:16:38.980627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:31.068 [2024-12-05 21:16:38.980633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.980645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.068 [2024-12-05 21:16:38.980656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.980717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.980722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.980726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.980735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:23:31.068 [2024-12-05 21:16:38.980743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:31.068 [2024-12-05 21:16:38.980749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.980761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.068 [2024-12-05 21:16:38.980770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.980829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.980835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.980838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.980845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:31.068 [2024-12-05 21:16:38.980853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.980866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.068 [2024-12-05 21:16:38.980875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.980950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.980956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.980959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.980962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.980966] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:31.068 [2024-12-05 21:16:38.980970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:31.068 [2024-12-05 21:16:38.980978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:31.068 [2024-12-05 21:16:38.981085] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:31.068 [2024-12-05 21:16:38.981089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:31.068 [2024-12-05 21:16:38.981095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.981099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.981102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.981108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.068 [2024-12-05 21:16:38.981117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.981179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.981184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.981191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.981194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.981198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:31.068 [2024-12-05 21:16:38.981206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.981210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.981213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.068 [2024-12-05 21:16:38.981218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.068 [2024-12-05 21:16:38.981227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.068 [2024-12-05 21:16:38.981289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.068 [2024-12-05 21:16:38.981294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.068 [2024-12-05 21:16:38.981297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.068 [2024-12-05 21:16:38.981300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.068 [2024-12-05 21:16:38.981304] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:31.068 [2024-12-05 21:16:38.981308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:31.068 [2024-12-05 21:16:38.981315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:31.069 [2024-12-05 21:16:38.981322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.069 [2024-12-05 21:16:38.981349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.069 [2024-12-05 21:16:38.981441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.069 [2024-12-05 21:16:38.981448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.069 [2024-12-05 21:16:38.981451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=4096, cccid=0 00:23:31.069 [2024-12-05 21:16:38.981458] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cb100) on tqpair(0x1469690): expected_datao=0, payload_size=4096 00:23:31.069 [2024-12-05 21:16:38.981462] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981482] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.069 [2024-12-05 21:16:38.981524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.069 [2024-12-05 21:16:38.981528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.069 [2024-12-05 21:16:38.981537] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:31.069 [2024-12-05 21:16:38.981545] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:31.069 [2024-12-05 21:16:38.981549] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:31.069 [2024-12-05 21:16:38.981553] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:31.069 [2024-12-05 21:16:38.981557] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:31.069 [2024-12-05 21:16:38.981561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981577] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:31.069 [2024-12-05 21:16:38.981596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb100, cid 0, qid 0 00:23:31.069 [2024-12-05 21:16:38.981660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.069 [2024-12-05 21:16:38.981665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.069 [2024-12-05 21:16:38.981668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.069 [2024-12-05 21:16:38.981677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.069 [2024-12-05 21:16:38.981694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:31.069 [2024-12-05 21:16:38.981711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.069 [2024-12-05 21:16:38.981727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.069 [2024-12-05 21:16:38.981742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.069 [2024-12-05 21:16:38.981779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x14cb100, cid 0, qid 0 00:23:31.069 [2024-12-05 21:16:38.981783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb280, cid 1, qid 0 00:23:31.069 [2024-12-05 21:16:38.981787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb400, cid 2, qid 0 00:23:31.069 [2024-12-05 21:16:38.981791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.069 [2024-12-05 21:16:38.981795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.069 [2024-12-05 21:16:38.981889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.069 [2024-12-05 21:16:38.981895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.069 [2024-12-05 21:16:38.981898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.069 [2024-12-05 21:16:38.981905] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:31.069 [2024-12-05 21:16:38.981909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.981927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.981931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 
21:16:38.981934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.981939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:31.069 [2024-12-05 21:16:38.981949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.069 [2024-12-05 21:16:38.982008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.069 [2024-12-05 21:16:38.982014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.069 [2024-12-05 21:16:38.982017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.069 [2024-12-05 21:16:38.982072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.982082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.982089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1469690) 00:23:31.069 [2024-12-05 21:16:38.982097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.069 [2024-12-05 21:16:38.982107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.069 [2024-12-05 21:16:38.982185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.069 [2024-12-05 21:16:38.982192] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.069 [2024-12-05 21:16:38.982196] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982199] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=4096, cccid=4 00:23:31.069 [2024-12-05 21:16:38.982202] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cb700) on tqpair(0x1469690): expected_datao=0, payload_size=4096 00:23:31.069 [2024-12-05 21:16:38.982206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.069 [2024-12-05 21:16:38.982230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.069 [2024-12-05 21:16:38.982233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.069 [2024-12-05 21:16:38.982236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.069 [2024-12-05 21:16:38.982244] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:31.069 [2024-12-05 21:16:38.982253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.982261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:31.069 [2024-12-05 21:16:38.982267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:38.982275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:38.982286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.070 [2024-12-05 21:16:38.982365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.070 [2024-12-05 21:16:38.982375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.070 [2024-12-05 21:16:38.982378] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982381] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=4096, cccid=4 00:23:31.070 [2024-12-05 21:16:38.982385] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cb700) on tqpair(0x1469690): expected_datao=0, payload_size=4096 00:23:31.070 [2024-12-05 21:16:38.982389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982394] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982398] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:38.982422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:38.982425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.070 [2024-12-05 21:16:38.982439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:31.070 
[2024-12-05 21:16:38.982448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:38.982454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:38.982463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:38.982475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.070 [2024-12-05 21:16:38.982546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.070 [2024-12-05 21:16:38.982552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.070 [2024-12-05 21:16:38.982555] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982558] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=4096, cccid=4 00:23:31.070 [2024-12-05 21:16:38.982561] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cb700) on tqpair(0x1469690): expected_datao=0, payload_size=4096 00:23:31.070 [2024-12-05 21:16:38.982565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982577] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:38.982581] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:39.023532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:39.023535] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.070 [2024-12-05 21:16:39.023547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023583] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:31.070 [2024-12-05 21:16:39.023587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:31.070 [2024-12-05 21:16:39.023592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:31.070 [2024-12-05 21:16:39.023606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023609] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.023616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.023622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.023634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.070 [2024-12-05 21:16:39.023648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.070 [2024-12-05 21:16:39.023653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb880, cid 5, qid 0 00:23:31.070 [2024-12-05 21:16:39.023743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:39.023748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:39.023751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.070 [2024-12-05 21:16:39.023760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:39.023765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:39.023768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb880) on tqpair=0x1469690 00:23:31.070 [2024-12-05 
21:16:39.023778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.023787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.023798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb880, cid 5, qid 0 00:23:31.070 [2024-12-05 21:16:39.023863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:39.023869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:39.023872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb880) on tqpair=0x1469690 00:23:31.070 [2024-12-05 21:16:39.023883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.023892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.023901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb880, cid 5, qid 0 00:23:31.070 [2024-12-05 21:16:39.023971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:39.023977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:39.023980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x14cb880) on tqpair=0x1469690 00:23:31.070 [2024-12-05 21:16:39.023990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.023994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.023999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.024009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb880, cid 5, qid 0 00:23:31.070 [2024-12-05 21:16:39.024069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.070 [2024-12-05 21:16:39.024075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.070 [2024-12-05 21:16:39.024078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.024081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb880) on tqpair=0x1469690 00:23:31.070 [2024-12-05 21:16:39.024093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.024097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.024103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.024109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.024114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.024119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.070 [2024-12-05 21:16:39.024125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.024128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.024134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.024140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.070 [2024-12-05 21:16:39.024143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1469690) 00:23:31.070 [2024-12-05 21:16:39.024148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.070 [2024-12-05 21:16:39.024159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb880, cid 5, qid 0 00:23:31.070 [2024-12-05 21:16:39.024163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb700, cid 4, qid 0 00:23:31.070 [2024-12-05 21:16:39.024167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cba00, cid 6, qid 0 00:23:31.071 [2024-12-05 21:16:39.024171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cbb80, cid 7, qid 0 00:23:31.071 [2024-12-05 21:16:39.024304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.071 [2024-12-05 21:16:39.024310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.071 [2024-12-05 21:16:39.024313] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.024317] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=8192, cccid=5 00:23:31.071 [2024-12-05 21:16:39.024321] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cb880) on tqpair(0x1469690): expected_datao=0, payload_size=8192 00:23:31.071 [2024-12-05 21:16:39.024325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.024339] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.024343] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.024347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.071 [2024-12-05 21:16:39.024352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.071 [2024-12-05 21:16:39.024355] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.024358] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=512, cccid=4 00:23:31.071 [2024-12-05 21:16:39.024362] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cb700) on tqpair(0x1469690): expected_datao=0, payload_size=512 00:23:31.071 [2024-12-05 21:16:39.024366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028376] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028380] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.071 [2024-12-05 21:16:39.028389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.071 [2024-12-05 21:16:39.028392] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028395] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=512, cccid=6 00:23:31.071 [2024-12-05 21:16:39.028399] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x14cba00) on tqpair(0x1469690): expected_datao=0, payload_size=512 00:23:31.071 [2024-12-05 21:16:39.028403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028411] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028414] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.071 [2024-12-05 21:16:39.028423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.071 [2024-12-05 21:16:39.028426] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028429] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1469690): datao=0, datal=4096, cccid=7 00:23:31.071 [2024-12-05 21:16:39.028433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cbb80) on tqpair(0x1469690): expected_datao=0, payload_size=4096 00:23:31.071 [2024-12-05 21:16:39.028437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028442] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.071 [2024-12-05 21:16:39.028457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.071 [2024-12-05 21:16:39.028460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb880) on tqpair=0x1469690 00:23:31.071 [2024-12-05 21:16:39.028474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.071 [2024-12-05 21:16:39.028479] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.071 [2024-12-05 21:16:39.028482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb700) on tqpair=0x1469690 00:23:31.071 [2024-12-05 21:16:39.028493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.071 [2024-12-05 21:16:39.028498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.071 [2024-12-05 21:16:39.028501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cba00) on tqpair=0x1469690 00:23:31.071 [2024-12-05 21:16:39.028510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.071 [2024-12-05 21:16:39.028515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.071 [2024-12-05 21:16:39.028518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.071 [2024-12-05 21:16:39.028521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cbb80) on tqpair=0x1469690 00:23:31.071 ===================================================== 00:23:31.071 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:31.071 ===================================================== 00:23:31.071 Controller Capabilities/Features 00:23:31.071 ================================ 00:23:31.071 Vendor ID: 8086 00:23:31.071 Subsystem Vendor ID: 8086 00:23:31.071 Serial Number: SPDK00000000000001 00:23:31.071 Model Number: SPDK bdev Controller 00:23:31.071 Firmware Version: 25.01 00:23:31.071 Recommended Arb Burst: 6 00:23:31.071 IEEE OUI Identifier: e4 d2 5c 00:23:31.071 Multi-path I/O 00:23:31.071 May have multiple subsystem ports: Yes 00:23:31.071 May have multiple controllers: Yes 00:23:31.071 Associated with SR-IOV VF: No 
00:23:31.071 Max Data Transfer Size: 131072 00:23:31.071 Max Number of Namespaces: 32 00:23:31.071 Max Number of I/O Queues: 127 00:23:31.071 NVMe Specification Version (VS): 1.3 00:23:31.071 NVMe Specification Version (Identify): 1.3 00:23:31.071 Maximum Queue Entries: 128 00:23:31.071 Contiguous Queues Required: Yes 00:23:31.071 Arbitration Mechanisms Supported 00:23:31.071 Weighted Round Robin: Not Supported 00:23:31.071 Vendor Specific: Not Supported 00:23:31.071 Reset Timeout: 15000 ms 00:23:31.071 Doorbell Stride: 4 bytes 00:23:31.071 NVM Subsystem Reset: Not Supported 00:23:31.071 Command Sets Supported 00:23:31.071 NVM Command Set: Supported 00:23:31.071 Boot Partition: Not Supported 00:23:31.071 Memory Page Size Minimum: 4096 bytes 00:23:31.071 Memory Page Size Maximum: 4096 bytes 00:23:31.071 Persistent Memory Region: Not Supported 00:23:31.071 Optional Asynchronous Events Supported 00:23:31.071 Namespace Attribute Notices: Supported 00:23:31.071 Firmware Activation Notices: Not Supported 00:23:31.071 ANA Change Notices: Not Supported 00:23:31.071 PLE Aggregate Log Change Notices: Not Supported 00:23:31.071 LBA Status Info Alert Notices: Not Supported 00:23:31.071 EGE Aggregate Log Change Notices: Not Supported 00:23:31.071 Normal NVM Subsystem Shutdown event: Not Supported 00:23:31.071 Zone Descriptor Change Notices: Not Supported 00:23:31.071 Discovery Log Change Notices: Not Supported 00:23:31.071 Controller Attributes 00:23:31.071 128-bit Host Identifier: Supported 00:23:31.071 Non-Operational Permissive Mode: Not Supported 00:23:31.071 NVM Sets: Not Supported 00:23:31.071 Read Recovery Levels: Not Supported 00:23:31.071 Endurance Groups: Not Supported 00:23:31.071 Predictable Latency Mode: Not Supported 00:23:31.071 Traffic Based Keep ALive: Not Supported 00:23:31.071 Namespace Granularity: Not Supported 00:23:31.071 SQ Associations: Not Supported 00:23:31.071 UUID List: Not Supported 00:23:31.071 Multi-Domain Subsystem: Not Supported 00:23:31.071 
Fixed Capacity Management: Not Supported 00:23:31.071 Variable Capacity Management: Not Supported 00:23:31.071 Delete Endurance Group: Not Supported 00:23:31.071 Delete NVM Set: Not Supported 00:23:31.071 Extended LBA Formats Supported: Not Supported 00:23:31.071 Flexible Data Placement Supported: Not Supported 00:23:31.071 00:23:31.071 Controller Memory Buffer Support 00:23:31.071 ================================ 00:23:31.071 Supported: No 00:23:31.071 00:23:31.071 Persistent Memory Region Support 00:23:31.071 ================================ 00:23:31.071 Supported: No 00:23:31.071 00:23:31.071 Admin Command Set Attributes 00:23:31.071 ============================ 00:23:31.071 Security Send/Receive: Not Supported 00:23:31.071 Format NVM: Not Supported 00:23:31.071 Firmware Activate/Download: Not Supported 00:23:31.071 Namespace Management: Not Supported 00:23:31.071 Device Self-Test: Not Supported 00:23:31.071 Directives: Not Supported 00:23:31.071 NVMe-MI: Not Supported 00:23:31.071 Virtualization Management: Not Supported 00:23:31.071 Doorbell Buffer Config: Not Supported 00:23:31.071 Get LBA Status Capability: Not Supported 00:23:31.071 Command & Feature Lockdown Capability: Not Supported 00:23:31.071 Abort Command Limit: 4 00:23:31.071 Async Event Request Limit: 4 00:23:31.071 Number of Firmware Slots: N/A 00:23:31.071 Firmware Slot 1 Read-Only: N/A 00:23:31.071 Firmware Activation Without Reset: N/A 00:23:31.071 Multiple Update Detection Support: N/A 00:23:31.071 Firmware Update Granularity: No Information Provided 00:23:31.071 Per-Namespace SMART Log: No 00:23:31.071 Asymmetric Namespace Access Log Page: Not Supported 00:23:31.071 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:31.071 Command Effects Log Page: Supported 00:23:31.071 Get Log Page Extended Data: Supported 00:23:31.071 Telemetry Log Pages: Not Supported 00:23:31.071 Persistent Event Log Pages: Not Supported 00:23:31.071 Supported Log Pages Log Page: May Support 00:23:31.071 Commands Supported & 
Effects Log Page: Not Supported 00:23:31.071 Feature Identifiers & Effects Log Page:May Support 00:23:31.071 NVMe-MI Commands & Effects Log Page: May Support 00:23:31.071 Data Area 4 for Telemetry Log: Not Supported 00:23:31.071 Error Log Page Entries Supported: 128 00:23:31.072 Keep Alive: Supported 00:23:31.072 Keep Alive Granularity: 10000 ms 00:23:31.072 00:23:31.072 NVM Command Set Attributes 00:23:31.072 ========================== 00:23:31.072 Submission Queue Entry Size 00:23:31.072 Max: 64 00:23:31.072 Min: 64 00:23:31.072 Completion Queue Entry Size 00:23:31.072 Max: 16 00:23:31.072 Min: 16 00:23:31.072 Number of Namespaces: 32 00:23:31.072 Compare Command: Supported 00:23:31.072 Write Uncorrectable Command: Not Supported 00:23:31.072 Dataset Management Command: Supported 00:23:31.072 Write Zeroes Command: Supported 00:23:31.072 Set Features Save Field: Not Supported 00:23:31.072 Reservations: Supported 00:23:31.072 Timestamp: Not Supported 00:23:31.072 Copy: Supported 00:23:31.072 Volatile Write Cache: Present 00:23:31.072 Atomic Write Unit (Normal): 1 00:23:31.072 Atomic Write Unit (PFail): 1 00:23:31.072 Atomic Compare & Write Unit: 1 00:23:31.072 Fused Compare & Write: Supported 00:23:31.072 Scatter-Gather List 00:23:31.072 SGL Command Set: Supported 00:23:31.072 SGL Keyed: Supported 00:23:31.072 SGL Bit Bucket Descriptor: Not Supported 00:23:31.072 SGL Metadata Pointer: Not Supported 00:23:31.072 Oversized SGL: Not Supported 00:23:31.072 SGL Metadata Address: Not Supported 00:23:31.072 SGL Offset: Supported 00:23:31.072 Transport SGL Data Block: Not Supported 00:23:31.072 Replay Protected Memory Block: Not Supported 00:23:31.072 00:23:31.072 Firmware Slot Information 00:23:31.072 ========================= 00:23:31.072 Active slot: 1 00:23:31.072 Slot 1 Firmware Revision: 25.01 00:23:31.072 00:23:31.072 00:23:31.072 Commands Supported and Effects 00:23:31.072 ============================== 00:23:31.072 Admin Commands 00:23:31.072 -------------- 
00:23:31.072 Get Log Page (02h): Supported 00:23:31.072 Identify (06h): Supported 00:23:31.072 Abort (08h): Supported 00:23:31.072 Set Features (09h): Supported 00:23:31.072 Get Features (0Ah): Supported 00:23:31.072 Asynchronous Event Request (0Ch): Supported 00:23:31.072 Keep Alive (18h): Supported 00:23:31.072 I/O Commands 00:23:31.072 ------------ 00:23:31.072 Flush (00h): Supported LBA-Change 00:23:31.072 Write (01h): Supported LBA-Change 00:23:31.072 Read (02h): Supported 00:23:31.072 Compare (05h): Supported 00:23:31.072 Write Zeroes (08h): Supported LBA-Change 00:23:31.072 Dataset Management (09h): Supported LBA-Change 00:23:31.072 Copy (19h): Supported LBA-Change 00:23:31.072 00:23:31.072 Error Log 00:23:31.072 ========= 00:23:31.072 00:23:31.072 Arbitration 00:23:31.072 =========== 00:23:31.072 Arbitration Burst: 1 00:23:31.072 00:23:31.072 Power Management 00:23:31.072 ================ 00:23:31.072 Number of Power States: 1 00:23:31.072 Current Power State: Power State #0 00:23:31.072 Power State #0: 00:23:31.072 Max Power: 0.00 W 00:23:31.072 Non-Operational State: Operational 00:23:31.072 Entry Latency: Not Reported 00:23:31.072 Exit Latency: Not Reported 00:23:31.072 Relative Read Throughput: 0 00:23:31.072 Relative Read Latency: 0 00:23:31.072 Relative Write Throughput: 0 00:23:31.072 Relative Write Latency: 0 00:23:31.072 Idle Power: Not Reported 00:23:31.072 Active Power: Not Reported 00:23:31.072 Non-Operational Permissive Mode: Not Supported 00:23:31.072 00:23:31.072 Health Information 00:23:31.072 ================== 00:23:31.072 Critical Warnings: 00:23:31.072 Available Spare Space: OK 00:23:31.072 Temperature: OK 00:23:31.072 Device Reliability: OK 00:23:31.072 Read Only: No 00:23:31.072 Volatile Memory Backup: OK 00:23:31.072 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:31.072 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:31.072 Available Spare: 0% 00:23:31.072 Available Spare Threshold: 0% 00:23:31.072 Life Percentage 
Used:[2024-12-05 21:16:39.028602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1469690) 00:23:31.072 [2024-12-05 21:16:39.028612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.072 [2024-12-05 21:16:39.028625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cbb80, cid 7, qid 0 00:23:31.072 [2024-12-05 21:16:39.028780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.072 [2024-12-05 21:16:39.028786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.072 [2024-12-05 21:16:39.028789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cbb80) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.028820] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:31.072 [2024-12-05 21:16:39.028829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb100) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.028834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.072 [2024-12-05 21:16:39.028839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb280) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.028844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.072 [2024-12-05 21:16:39.028849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb400) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.028853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.072 [2024-12-05 21:16:39.028857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.028861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.072 [2024-12-05 21:16:39.028867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.072 [2024-12-05 21:16:39.028880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.072 [2024-12-05 21:16:39.028891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.072 [2024-12-05 21:16:39.028952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.072 [2024-12-05 21:16:39.028958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.072 [2024-12-05 21:16:39.028961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.028970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.028976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.072 [2024-12-05 21:16:39.028981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.072 [2024-12-05 21:16:39.028993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.072 [2024-12-05 21:16:39.029069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.072 [2024-12-05 21:16:39.029075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.072 [2024-12-05 21:16:39.029078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.029081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.072 [2024-12-05 21:16:39.029085] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:31.072 [2024-12-05 21:16:39.029089] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:31.072 [2024-12-05 21:16:39.029097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.072 [2024-12-05 21:16:39.029100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029198] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 
21:16:39.029409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 
21:16:39.029563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.029958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.029963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.029966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.029978] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.029984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.029990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.029999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.030076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.030081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.030084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.030096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.030109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.030119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.030192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.030197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.030200] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.030212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.030224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.030233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 21:16:39.030292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.030297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.073 [2024-12-05 21:16:39.030300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.073 [2024-12-05 21:16:39.030312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.073 [2024-12-05 21:16:39.030318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.073 [2024-12-05 21:16:39.030324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.073 [2024-12-05 21:16:39.030333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.073 [2024-12-05 
21:16:39.030394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.073 [2024-12-05 21:16:39.030400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.030403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.030414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.030426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.030436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.030512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.030518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.030521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.030532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.030544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.030555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.030628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.030634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.030637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.030648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.030660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.030669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.030733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.030739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.030742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.030753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:31.074 [2024-12-05 21:16:39.030760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.030766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.030775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.030834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.030840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.030843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.030853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.030866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.030875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.030934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.030940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.030943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) 
on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.030954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.030960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.030966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.030977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.031052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.031057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.031060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.031071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.031083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.031093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.031153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.031159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:31.074 [2024-12-05 21:16:39.031162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.031174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.031186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.031195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.031254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.031259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.031262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.031273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.031280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.031285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.031294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.034373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.034388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.034391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.034394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.034404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.034408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.034411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1469690) 00:23:31.074 [2024-12-05 21:16:39.034417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.074 [2024-12-05 21:16:39.034428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cb580, cid 3, qid 0 00:23:31.074 [2024-12-05 21:16:39.034574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.074 [2024-12-05 21:16:39.034580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.074 [2024-12-05 21:16:39.034583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.074 [2024-12-05 21:16:39.034586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cb580) on tqpair=0x1469690 00:23:31.074 [2024-12-05 21:16:39.034593] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:23:31.074 0% 00:23:31.074 Data Units Read: 0 00:23:31.074 Data Units Written: 0 00:23:31.074 Host Read Commands: 0 00:23:31.074 Host Write Commands: 0 00:23:31.074 Controller Busy Time: 0 minutes 00:23:31.074 Power Cycles: 0 
00:23:31.074 Power On Hours: 0 hours 00:23:31.074 Unsafe Shutdowns: 0 00:23:31.074 Unrecoverable Media Errors: 0 00:23:31.074 Lifetime Error Log Entries: 0 00:23:31.074 Warning Temperature Time: 0 minutes 00:23:31.074 Critical Temperature Time: 0 minutes 00:23:31.074 00:23:31.074 Number of Queues 00:23:31.074 ================ 00:23:31.074 Number of I/O Submission Queues: 127 00:23:31.074 Number of I/O Completion Queues: 127 00:23:31.074 00:23:31.074 Active Namespaces 00:23:31.074 ================= 00:23:31.074 Namespace ID:1 00:23:31.074 Error Recovery Timeout: Unlimited 00:23:31.074 Command Set Identifier: NVM (00h) 00:23:31.074 Deallocate: Supported 00:23:31.074 Deallocated/Unwritten Error: Not Supported 00:23:31.074 Deallocated Read Value: Unknown 00:23:31.074 Deallocate in Write Zeroes: Not Supported 00:23:31.074 Deallocated Guard Field: 0xFFFF 00:23:31.074 Flush: Supported 00:23:31.074 Reservation: Supported 00:23:31.075 Namespace Sharing Capabilities: Multiple Controllers 00:23:31.075 Size (in LBAs): 131072 (0GiB) 00:23:31.075 Capacity (in LBAs): 131072 (0GiB) 00:23:31.075 Utilization (in LBAs): 131072 (0GiB) 00:23:31.075 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:31.075 EUI64: ABCDEF0123456789 00:23:31.075 UUID: 98ecbbd9-634b-41c7-8c24-9a290d6971fb 00:23:31.075 Thin Provisioning: Not Supported 00:23:31.075 Per-NS Atomic Units: Yes 00:23:31.075 Atomic Boundary Size (Normal): 0 00:23:31.075 Atomic Boundary Size (PFail): 0 00:23:31.075 Atomic Boundary Offset: 0 00:23:31.075 Maximum Single Source Range Length: 65535 00:23:31.075 Maximum Copy Length: 65535 00:23:31.075 Maximum Source Range Count: 1 00:23:31.075 NGUID/EUI64 Never Reused: No 00:23:31.075 Namespace Write Protected: No 00:23:31.075 Number of LBA Formats: 1 00:23:31.075 Current LBA Format: LBA Format #00 00:23:31.075 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:31.075 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:31.075 21:16:39 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.075 rmmod nvme_tcp 00:23:31.075 rmmod nvme_fabrics 00:23:31.075 rmmod nvme_keyring 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1391110 ']' 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1391110 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1391110 ']' 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 
-- # kill -0 1391110 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.075 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1391110 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1391110' 00:23:31.335 killing process with pid 1391110 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1391110 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1391110 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.335 21:16:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.874 00:23:33.874 real 0m9.869s 00:23:33.874 user 0m7.840s 00:23:33.874 sys 0m4.843s 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:33.874 ************************************ 00:23:33.874 END TEST nvmf_identify 00:23:33.874 ************************************ 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.874 ************************************ 00:23:33.874 START TEST nvmf_perf 00:23:33.874 ************************************ 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:33.874 * Looking for test storage... 
00:23:33.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:33.874 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.875 --rc genhtml_branch_coverage=1 00:23:33.875 --rc genhtml_function_coverage=1 00:23:33.875 --rc genhtml_legend=1 00:23:33.875 --rc geninfo_all_blocks=1 00:23:33.875 --rc geninfo_unexecuted_blocks=1 00:23:33.875 00:23:33.875 ' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:33.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:33.875 --rc genhtml_branch_coverage=1 00:23:33.875 --rc genhtml_function_coverage=1 00:23:33.875 --rc genhtml_legend=1 00:23:33.875 --rc geninfo_all_blocks=1 00:23:33.875 --rc geninfo_unexecuted_blocks=1 00:23:33.875 00:23:33.875 ' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:33.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.875 --rc genhtml_branch_coverage=1 00:23:33.875 --rc genhtml_function_coverage=1 00:23:33.875 --rc genhtml_legend=1 00:23:33.875 --rc geninfo_all_blocks=1 00:23:33.875 --rc geninfo_unexecuted_blocks=1 00:23:33.875 00:23:33.875 ' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.875 --rc genhtml_branch_coverage=1 00:23:33.875 --rc genhtml_function_coverage=1 00:23:33.875 --rc genhtml_legend=1 00:23:33.875 --rc geninfo_all_blocks=1 00:23:33.875 --rc geninfo_unexecuted_blocks=1 00:23:33.875 00:23:33.875 ' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:33.875 21:16:41 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.875 21:16:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.443 21:16:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.443 
21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.443 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:40.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:40.444 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:40.444 Found net devices under 0000:86:00.0: cvl_0_0 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.444 21:16:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:40.444 Found net devices under 0000:86:00.1: cvl_0_1 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:23:40.444 00:23:40.444 --- 10.0.0.2 ping statistics --- 00:23:40.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.444 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:23:40.444 00:23:40.444 --- 10.0.0.1 ping statistics --- 00:23:40.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.444 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1394879 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1394879 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1394879 ']' 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.444 [2024-12-05 21:16:47.720181] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:23:40.444 [2024-12-05 21:16:47.720226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.444 [2024-12-05 21:16:47.797256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.444 [2024-12-05 21:16:47.839532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.444 [2024-12-05 21:16:47.839568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.444 [2024-12-05 21:16:47.839576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.444 [2024-12-05 21:16:47.839583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.444 [2024-12-05 21:16:47.839588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.444 [2024-12-05 21:16:47.841104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.444 [2024-12-05 21:16:47.841211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.444 [2024-12-05 21:16:47.841334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.444 [2024-12-05 21:16:47.841335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:40.444 21:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:42.980 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:42.980 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:43.239 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:43.239 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:43.498 21:16:51 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:43.498 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:43.498 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:43.498 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:43.498 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.498 [2024-12-05 21:16:51.588703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.757 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.757 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:43.757 21:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.015 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:44.015 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:44.273 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.273 [2024-12-05 21:16:52.363688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.530 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:44.530 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:44.530 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:44.530 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:44.530 21:16:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:45.904 Initializing NVMe Controllers 00:23:45.904 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:45.904 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:45.904 Initialization complete. Launching workers. 00:23:45.904 ======================================================== 00:23:45.904 Latency(us) 00:23:45.904 Device Information : IOPS MiB/s Average min max 00:23:45.904 PCIE (0000:5e:00.0) NSID 1 from core 0: 98141.84 383.37 325.39 34.62 4755.58 00:23:45.904 ======================================================== 00:23:45.904 Total : 98141.84 383.37 325.39 34.62 4755.58 00:23:45.904 00:23:45.904 21:16:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:47.281 Initializing NVMe Controllers 00:23:47.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:47.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:47.281 Initialization complete. Launching workers. 
00:23:47.281 ======================================================== 00:23:47.281 Latency(us) 00:23:47.281 Device Information : IOPS MiB/s Average min max 00:23:47.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12934.89 105.41 45693.83 00:23:47.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15208.23 5026.30 47884.64 00:23:47.281 ======================================================== 00:23:47.281 Total : 145.00 0.57 13969.65 105.41 47884.64 00:23:47.281 00:23:47.281 21:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:48.656 Initializing NVMe Controllers 00:23:48.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:48.656 Initialization complete. Launching workers. 
00:23:48.656 ======================================================== 00:23:48.656 Latency(us) 00:23:48.656 Device Information : IOPS MiB/s Average min max 00:23:48.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11093.78 43.34 2883.25 439.92 7974.16 00:23:48.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3794.59 14.82 8431.89 5428.27 16146.82 00:23:48.656 ======================================================== 00:23:48.656 Total : 14888.37 58.16 4297.43 439.92 16146.82 00:23:48.656 00:23:48.656 21:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:48.656 21:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:48.656 21:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.185 Initializing NVMe Controllers 00:23:51.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.185 Controller IO queue size 128, less than required. 00:23:51.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.185 Controller IO queue size 128, less than required. 00:23:51.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:51.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:51.185 Initialization complete. Launching workers. 
00:23:51.185 ======================================================== 00:23:51.185 Latency(us) 00:23:51.185 Device Information : IOPS MiB/s Average min max 00:23:51.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1797.93 449.48 72388.60 48758.30 111394.22 00:23:51.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.48 149.87 221731.99 72431.45 376391.54 00:23:51.185 ======================================================== 00:23:51.185 Total : 2397.41 599.35 109732.24 48758.30 376391.54 00:23:51.185 00:23:51.185 21:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:51.185 No valid NVMe controllers or AIO or URING devices found 00:23:51.185 Initializing NVMe Controllers 00:23:51.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.185 Controller IO queue size 128, less than required. 00:23:51.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.185 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:51.185 Controller IO queue size 128, less than required. 00:23:51.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.185 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:51.185 WARNING: Some requested NVMe devices were skipped 00:23:51.185 21:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:53.723 Initializing NVMe Controllers 00:23:53.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.723 Controller IO queue size 128, less than required. 00:23:53.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:53.723 Controller IO queue size 128, less than required. 00:23:53.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:53.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:53.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:53.723 Initialization complete. Launching workers. 
00:23:53.723 00:23:53.723 ==================== 00:23:53.723 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:53.723 TCP transport: 00:23:53.723 polls: 15755 00:23:53.723 idle_polls: 11620 00:23:53.723 sock_completions: 4135 00:23:53.723 nvme_completions: 6351 00:23:53.723 submitted_requests: 9526 00:23:53.723 queued_requests: 1 00:23:53.723 00:23:53.723 ==================== 00:23:53.723 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:53.723 TCP transport: 00:23:53.723 polls: 15940 00:23:53.723 idle_polls: 11660 00:23:53.723 sock_completions: 4280 00:23:53.723 nvme_completions: 6497 00:23:53.723 submitted_requests: 9708 00:23:53.723 queued_requests: 1 00:23:53.723 ======================================================== 00:23:53.723 Latency(us) 00:23:53.723 Device Information : IOPS MiB/s Average min max 00:23:53.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1587.40 396.85 83320.36 49506.12 152841.21 00:23:53.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1623.90 405.98 79530.26 41143.30 125935.79 00:23:53.723 ======================================================== 00:23:53.723 Total : 3211.31 802.83 81403.77 41143.30 152841.21 00:23:53.723 00:23:53.723 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:53.723 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.983 rmmod nvme_tcp 00:23:53.983 rmmod nvme_fabrics 00:23:53.983 rmmod nvme_keyring 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1394879 ']' 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1394879 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1394879 ']' 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1394879 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394879 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394879' 00:23:53.983 killing process with pid 1394879 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1394879 00:23:53.983 21:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1394879 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.519 21:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.421 00:23:58.421 real 0m24.632s 00:23:58.421 user 1m4.333s 00:23:58.421 sys 0m8.291s 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:58.421 ************************************ 00:23:58.421 END TEST nvmf_perf 00:23:58.421 ************************************ 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.421 ************************************ 00:23:58.421 START TEST nvmf_fio_host 00:23:58.421 ************************************ 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:58.421 * Looking for test storage... 00:23:58.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.421 21:17:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.421 21:17:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:58.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.421 --rc genhtml_branch_coverage=1 00:23:58.421 --rc genhtml_function_coverage=1 00:23:58.421 --rc genhtml_legend=1 00:23:58.421 --rc geninfo_all_blocks=1 00:23:58.421 --rc geninfo_unexecuted_blocks=1 00:23:58.421 00:23:58.421 ' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:58.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.421 --rc genhtml_branch_coverage=1 00:23:58.421 --rc genhtml_function_coverage=1 00:23:58.421 --rc genhtml_legend=1 00:23:58.421 --rc geninfo_all_blocks=1 00:23:58.421 --rc geninfo_unexecuted_blocks=1 00:23:58.421 00:23:58.421 ' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:58.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.421 --rc genhtml_branch_coverage=1 00:23:58.421 --rc genhtml_function_coverage=1 00:23:58.421 --rc genhtml_legend=1 00:23:58.421 --rc geninfo_all_blocks=1 00:23:58.421 --rc geninfo_unexecuted_blocks=1 00:23:58.421 00:23:58.421 ' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:58.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.421 --rc genhtml_branch_coverage=1 00:23:58.421 --rc genhtml_function_coverage=1 00:23:58.421 --rc genhtml_legend=1 00:23:58.421 --rc geninfo_all_blocks=1 00:23:58.421 --rc geninfo_unexecuted_blocks=1 00:23:58.421 00:23:58.421 ' 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.421 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.422 21:17:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.422 21:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.014 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.014 21:17:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.015 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.015 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.015 21:17:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:24:05.015 00:24:05.015 --- 10.0.0.2 ping statistics --- 00:24:05.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.015 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:24:05.015 00:24:05.015 --- 10.0.0.1 ping statistics --- 00:24:05.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.015 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1400996 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1400996 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1400996 ']' 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.015 21:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.015 [2024-12-05 21:17:12.432572] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:24:05.015 [2024-12-05 21:17:12.432617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.015 [2024-12-05 21:17:12.510131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.015 [2024-12-05 21:17:12.552242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.015 [2024-12-05 21:17:12.552277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:05.015 [2024-12-05 21:17:12.552284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.015 [2024-12-05 21:17:12.552290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.015 [2024-12-05 21:17:12.552295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.015 [2024-12-05 21:17:12.553799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.015 [2024-12-05 21:17:12.553905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.015 [2024-12-05 21:17:12.554013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.015 [2024-12-05 21:17:12.554014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.274 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.274 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:05.274 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:05.531 [2024-12-05 21:17:13.457863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.531 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:05.531 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.531 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.531 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:05.789 Malloc1 00:24:05.789 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.047 21:17:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:06.047 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.305 [2024-12-05 21:17:14.291775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.305 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:06.563 21:17:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:06.563 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.564 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.564 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.564 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:06.564 21:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.822 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:06.822 fio-3.35 00:24:06.822 Starting 1 thread 00:24:09.345 00:24:09.345 test: (groupid=0, jobs=1): err= 0: pid=1401592: Thu Dec 5 21:17:17 2024 00:24:09.345 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:24:09.345 slat (nsec): min=1540, max=242457, avg=1714.16, stdev=2190.57 00:24:09.345 clat (usec): min=3041, max=9739, avg=5938.40, stdev=451.56 00:24:09.345 lat (usec): min=3074, max=9741, avg=5940.11, stdev=451.42 00:24:09.345 clat percentiles (usec): 00:24:09.345 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:24:09.345 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:09.345 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:24:09.345 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 7898], 99.95th=[ 8717], 00:24:09.345 | 99.99th=[ 9241] 00:24:09.346 bw ( KiB/s): min=46456, max=48272, per=99.98%, avg=47546.00, stdev=795.12, samples=4 00:24:09.346 iops : min=11614, max=12068, avg=11886.50, stdev=198.78, samples=4 00:24:09.346 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:24:09.346 slat (nsec): min=1581, max=231975, avg=1777.38, stdev=1681.20 00:24:09.346 clat (usec): min=2457, max=9691, avg=4803.43, stdev=379.68 00:24:09.346 lat (usec): min=2473, max=9692, avg=4805.21, stdev=379.62 00:24:09.346 clat percentiles (usec): 00:24:09.346 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:24:09.346 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 
00:24:09.346 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:09.346 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7963], 99.95th=[ 8979], 00:24:09.346 | 99.99th=[ 9241] 00:24:09.346 bw ( KiB/s): min=47040, max=47872, per=99.99%, avg=47332.00, stdev=388.94, samples=4 00:24:09.346 iops : min=11760, max=11968, avg=11833.00, stdev=97.24, samples=4 00:24:09.346 lat (msec) : 4=0.77%, 10=99.23% 00:24:09.346 cpu : usr=71.41%, sys=27.64%, ctx=79, majf=0, minf=2 00:24:09.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:09.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:09.346 issued rwts: total=23838,23728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:09.346 00:24:09.346 Run status group 0 (all jobs): 00:24:09.346 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:24:09.346 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:09.346 21:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.346 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:09.346 fio-3.35 00:24:09.346 Starting 1 thread 00:24:11.946 00:24:11.946 test: (groupid=0, jobs=1): err= 0: pid=1402132: Thu Dec 5 21:17:19 2024 00:24:11.946 read: IOPS=10.9k, BW=171MiB/s (179MB/s)(342MiB/2006msec) 00:24:11.946 slat (nsec): min=2485, max=86860, avg=2793.97, stdev=1276.83 00:24:11.946 clat (usec): min=1413, max=13765, avg=6811.13, stdev=1700.68 00:24:11.946 lat (usec): min=1427, max=13780, avg=6813.93, stdev=1700.83 00:24:11.946 clat percentiles (usec): 00:24:11.946 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:24:11.946 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:24:11.946 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9765], 00:24:11.946 | 99.00th=[11994], 99.50th=[12518], 99.90th=[13173], 99.95th=[13173], 00:24:11.946 | 99.99th=[13566] 00:24:11.946 bw ( KiB/s): min=83936, max=95360, per=50.26%, avg=87784.00, stdev=5275.27, samples=4 00:24:11.946 iops : min= 5246, max= 5960, avg=5486.50, stdev=329.70, samples=4 00:24:11.946 write: IOPS=6486, BW=101MiB/s (106MB/s)(180MiB/1771msec); 0 zone resets 00:24:11.946 slat (usec): min=27, max=380, avg=31.35, stdev= 7.79 00:24:11.946 clat (usec): min=4690, max=15702, avg=8564.64, stdev=1480.16 00:24:11.946 lat (usec): min=4721, max=15814, avg=8595.99, stdev=1482.19 00:24:11.946 clat percentiles (usec): 00:24:11.946 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6915], 
20.00th=[ 7373], 00:24:11.946 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:11.946 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[11338], 00:24:11.946 | 99.00th=[12911], 99.50th=[13960], 99.90th=[14877], 99.95th=[15270], 00:24:11.946 | 99.99th=[15664] 00:24:11.946 bw ( KiB/s): min=87712, max=99200, per=88.16%, avg=91496.00, stdev=5332.87, samples=4 00:24:11.946 iops : min= 5482, max= 6200, avg=5718.50, stdev=333.30, samples=4 00:24:11.946 lat (msec) : 2=0.06%, 4=1.81%, 10=90.00%, 20=8.13% 00:24:11.946 cpu : usr=84.89%, sys=14.06%, ctx=52, majf=0, minf=2 00:24:11.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:11.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:11.946 issued rwts: total=21900,11488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:11.946 00:24:11.946 Run status group 0 (all jobs): 00:24:11.946 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=342MiB (359MB), run=2006-2006msec 00:24:11.946 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=180MiB (188MB), run=1771-1771msec 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.946 21:17:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.946 rmmod nvme_tcp 00:24:11.946 rmmod nvme_fabrics 00:24:11.946 rmmod nvme_keyring 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1400996 ']' 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1400996 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1400996 ']' 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1400996 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400996 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1400996' 
00:24:12.233 killing process with pid 1400996 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1400996 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1400996 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.233 21:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:14.768 00:24:14.768 real 0m16.169s 00:24:14.768 user 0m47.592s 00:24:14.768 sys 0m6.563s 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.768 ************************************ 
00:24:14.768 END TEST nvmf_fio_host 00:24:14.768 ************************************ 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.768 ************************************ 00:24:14.768 START TEST nvmf_failover 00:24:14.768 ************************************ 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.768 * Looking for test storage... 00:24:14.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.768 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.769 21:17:22 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.769 --rc genhtml_branch_coverage=1 00:24:14.769 --rc genhtml_function_coverage=1 00:24:14.769 --rc genhtml_legend=1 00:24:14.769 --rc geninfo_all_blocks=1 00:24:14.769 --rc geninfo_unexecuted_blocks=1 00:24:14.769 00:24:14.769 ' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.769 --rc genhtml_branch_coverage=1 00:24:14.769 --rc genhtml_function_coverage=1 00:24:14.769 --rc genhtml_legend=1 00:24:14.769 --rc geninfo_all_blocks=1 00:24:14.769 --rc geninfo_unexecuted_blocks=1 00:24:14.769 00:24:14.769 ' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.769 --rc genhtml_branch_coverage=1 00:24:14.769 --rc genhtml_function_coverage=1 00:24:14.769 --rc genhtml_legend=1 00:24:14.769 --rc geninfo_all_blocks=1 00:24:14.769 --rc geninfo_unexecuted_blocks=1 00:24:14.769 00:24:14.769 ' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.769 --rc genhtml_branch_coverage=1 00:24:14.769 --rc genhtml_function_coverage=1 00:24:14.769 --rc genhtml_legend=1 00:24:14.769 --rc 
geninfo_all_blocks=1 00:24:14.769 --rc geninfo_unexecuted_blocks=1 00:24:14.769 00:24:14.769 ' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.769 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.770 21:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.337 21:17:28 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:21.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:21.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:21.337 Found net devices under 0000:86:00.0: cvl_0_0 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:21.337 Found net devices under 0000:86:00.1: cvl_0_1 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.337 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:24:21.338 00:24:21.338 --- 10.0.0.2 ping statistics --- 00:24:21.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.338 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:24:21.338 00:24:21.338 --- 10.0.0.1 ping statistics --- 00:24:21.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.338 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1405943 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1405943 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1405943 ']' 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.338 21:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.338 [2024-12-05 21:17:28.661010] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:24:21.338 [2024-12-05 21:17:28.661052] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.338 [2024-12-05 21:17:28.739027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:21.338 [2024-12-05 21:17:28.783343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.338 [2024-12-05 21:17:28.783383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.338 [2024-12-05 21:17:28.783394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.338 [2024-12-05 21:17:28.783400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:21.338 [2024-12-05 21:17:28.783405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.338 [2024-12-05 21:17:28.787386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.338 [2024-12-05 21:17:28.787475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.338 [2024-12-05 21:17:28.787476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.596 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:21.596 [2024-12-05 21:17:29.690082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.854 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:21.854 Malloc0 00:24:21.854 21:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.112 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:22.370 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.628 [2024-12-05 21:17:30.530279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.628 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.886 [2024-12-05 21:17:30.742861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.886 [2024-12-05 21:17:30.939501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1406426 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1406426 /var/tmp/bdevperf.sock 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1406426 ']' 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.886 21:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:23.144 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.144 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:23.144 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:23.401 NVMe0n1 00:24:23.401 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:23.967 00:24:23.967 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1406546 00:24:23.967 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.967 21:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:24.904 21:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.163 21:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:28.452 21:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:28.452 00:24:28.452 21:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:28.710 [2024-12-05 21:17:36.704059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to 
be set 00:24:28.710 [2024-12-05 21:17:36.704148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 
21:17:36.704223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 [2024-12-05 21:17:36.704282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5c40 is same with the state(6) to be set 00:24:28.710 21:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:31.991 21:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.991 [2024-12-05 21:17:39.914573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.991 21:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:32.929 21:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.187 21:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1406546 00:24:39.758 { 00:24:39.758 "results": [ 00:24:39.758 { 00:24:39.758 "job": "NVMe0n1", 00:24:39.758 "core_mask": "0x1", 00:24:39.758 "workload": "verify", 00:24:39.758 "status": "finished", 00:24:39.758 "verify_range": { 00:24:39.758 "start": 0, 00:24:39.758 "length": 16384 00:24:39.758 }, 00:24:39.758 "queue_depth": 128, 00:24:39.758 "io_size": 4096, 00:24:39.758 "runtime": 15.009372, 00:24:39.758 "iops": 11303.204424542213, 00:24:39.758 "mibps": 44.15314228336802, 00:24:39.758 "io_failed": 11861, 00:24:39.758 "io_timeout": 0, 00:24:39.758 "avg_latency_us": 10562.71156983222, 00:24:39.758 "min_latency_us": 421.30285714285714, 00:24:39.758 "max_latency_us": 19099.062857142857 00:24:39.758 } 00:24:39.758 ], 00:24:39.758 "core_count": 1 00:24:39.758 } 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1406426 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1406426 ']' 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1406426 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1406426 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1406426' 00:24:39.758 killing process with pid 1406426 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1406426 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1406426 00:24:39.758 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:39.758 [2024-12-05 21:17:31.003016] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:24:39.758 [2024-12-05 21:17:31.003070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406426 ] 00:24:39.758 [2024-12-05 21:17:31.075424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.758 [2024-12-05 21:17:31.116506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.758 Running I/O for 15 seconds... 
00:24:39.758 11325.00 IOPS, 44.24 MiB/s [2024-12-05T20:17:47.866Z] [2024-12-05 21:17:33.067413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.758 [2024-12-05 21:17:33.067452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.758 [2024-12-05 21:17:33.067462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.758 [2024-12-05 21:17:33.067469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.758 [2024-12-05 21:17:33.067477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.758 [2024-12-05 21:17:33.067484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.758 [2024-12-05 21:17:33.067491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.758 [2024-12-05 21:17:33.067498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.758 [2024-12-05 21:17:33.067504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f4fa0 is same with the state(6) to be set 00:24:39.758 [2024-12-05 21:17:33.067568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.758 [2024-12-05 21:17:33.067577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.758 [2024-12-05 
21:17:33.067590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067681] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.067988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.067995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 
[2024-12-05 21:17:33.068025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068112] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.759 [2024-12-05 21:17:33.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.759 [2024-12-05 21:17:33.068194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 
21:17:33.068465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068545] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068714] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.760 [2024-12-05 21:17:33.068780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.760 [2024-12-05 21:17:33.068788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.760 [2024-12-05 21:17:33.068794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.761 [2024-12-05 21:17:33.068809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.761 [2024-12-05 21:17:33.068826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.761 [2024-12-05 21:17:33.068841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.761 [2024-12-05 21:17:33.068855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.761 [2024-12-05 21:17:33.068870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:39.761 [2024-12-05 21:17:33.068884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.761 [2024-12-05 21:17:33.068899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.068913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.068927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.068943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.068956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.068972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.068986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.068994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 
[2024-12-05 21:17:33.069135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.761 [2024-12-05 21:17:33.069376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.761 [2024-12-05 21:17:33.069385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 
[2024-12-05 21:17:33.069392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:33.069407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:33.069422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:33.069437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:33.069452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:33.069467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:33.069482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.762 [2024-12-05 21:17:33.069506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.762 [2024-12-05 21:17:33.069513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:24:39.762 [2024-12-05 21:17:33.069519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:33.069562] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:39.762 [2024-12-05 21:17:33.069571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:39.762 [2024-12-05 21:17:33.072359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:39.762 [2024-12-05 21:17:33.072390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f4fa0 (9): Bad file descriptor 00:24:39.762 [2024-12-05 21:17:33.215935] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:24:39.762 10588.00 IOPS, 41.36 MiB/s [2024-12-05T20:17:47.870Z] 10886.33 IOPS, 42.52 MiB/s [2024-12-05T20:17:47.870Z] 11038.75 IOPS, 43.12 MiB/s [2024-12-05T20:17:47.870Z] [2024-12-05 21:17:36.705005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.762 [2024-12-05 21:17:36.705271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 
[2024-12-05 21:17:36.705286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.762 [2024-12-05 21:17:36.705435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.762 [2024-12-05 21:17:36.705444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 [2024-12-05 21:17:36.705687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.763 [2024-12-05 21:17:36.705694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.763 
[2024-12-05 21:17:36.705701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 21:17:36.705708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / "ABORTED - SQ DELETION (00/08)" pair repeats for every in-flight 8-block WRITE on qid:1, lba:83232 through lba:83616 ...]
[2024-12-05 21:17:36.706451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-05 21:17:36.706458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83624 len:8 PRP1 0x0 PRP2 0x0
[2024-12-05 21:17:36.706465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 21:17:36.706474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same "aborting queued i/o" / "Command completed manually" / WRITE / "ABORTED - SQ DELETION (00/08)" sequence repeats for every queued 8-block WRITE, lba:83632 through lba:83880 ...]
[2024-12-05 21:17:36.717883] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[2024-12-05 21:17:36.717905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 21:17:36.717912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / "ABORTED - SQ DELETION (00/08)" pair repeats for admin commands cid:2, cid:1, and cid:0 ...]
[2024-12-05 21:17:36.717961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
[2024-12-05 21:17:36.717994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f4fa0 (9): Bad file descriptor
[2024-12-05 21:17:36.721215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
[2024-12-05 21:17:36.791570] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
10964.00 IOPS, 42.83 MiB/s [2024-12-05T20:17:47.874Z] 11071.67 IOPS, 43.25 MiB/s [2024-12-05T20:17:47.874Z] 11133.29 IOPS, 43.49 MiB/s [2024-12-05T20:17:47.874Z] 11178.62 IOPS, 43.67 MiB/s [2024-12-05T20:17:47.874Z] 11196.11 IOPS, 43.73 MiB/s [2024-12-05T20:17:47.874Z]
[2024-12-05 21:17:41.131453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-05 21:17:41.131496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for queued 8-block READs on qid:1, lba:114688 onward ...]
sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 
21:17:41.131703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.766 [2024-12-05 21:17:41.131861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.766 [2024-12-05 21:17:41.131868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 
21:17:41.131955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.131992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.131999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132037] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 
21:17:41.132210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.767 [2024-12-05 21:17:41.132357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.767 [2024-12-05 21:17:41.132365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.767 [2024-12-05 21:17:41.132376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132466] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:39.768 [2024-12-05 21:17:41.132631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 
[2024-12-05 21:17:41.132878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.768 [2024-12-05 21:17:41.132958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.768 [2024-12-05 21:17:41.132966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.132972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.132980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.132986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.132994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 
[2024-12-05 21:17:41.133122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.769 [2024-12-05 21:17:41.133311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.769 [2024-12-05 21:17:41.133348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115664 len:8 PRP1 0x0 PRP2 0x0 00:24:39.769 [2024-12-05 21:17:41.133355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.769 [2024-12-05 21:17:41.133374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.769 [2024-12-05 21:17:41.133380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115672 len:8 PRP1 0x0 PRP2 0x0 00:24:39.769 [2024-12-05 21:17:41.133386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.769 [2024-12-05 21:17:41.133399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.769 [2024-12-05 21:17:41.133405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:115680 len:8 PRP1 0x0 PRP2 0x0 00:24:39.769 [2024-12-05 21:17:41.133411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.769 [2024-12-05 21:17:41.133423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.769 [2024-12-05 21:17:41.133428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115688 len:8 PRP1 0x0 PRP2 0x0 00:24:39.769 [2024-12-05 21:17:41.133434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.769 [2024-12-05 21:17:41.133445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.769 [2024-12-05 21:17:41.133451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115696 len:8 PRP1 0x0 PRP2 0x0 00:24:39.769 [2024-12-05 21:17:41.133457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133500] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:39.769 [2024-12-05 21:17:41.133521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.769 [2024-12-05 21:17:41.133529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.769 [2024-12-05 21:17:41.133543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.769 [2024-12-05 21:17:41.133557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.769 [2024-12-05 21:17:41.133570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.769 [2024-12-05 21:17:41.133577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:39.769 [2024-12-05 21:17:41.136365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:39.769 [2024-12-05 21:17:41.136402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f4fa0 (9): Bad file descriptor 00:24:39.769 [2024-12-05 21:17:41.161126] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:24:39.769 11189.10 IOPS, 43.71 MiB/s [2024-12-05T20:17:47.877Z] 11210.91 IOPS, 43.79 MiB/s [2024-12-05T20:17:47.877Z] 11245.58 IOPS, 43.93 MiB/s [2024-12-05T20:17:47.877Z] 11261.85 IOPS, 43.99 MiB/s [2024-12-05T20:17:47.877Z] 11296.86 IOPS, 44.13 MiB/s [2024-12-05T20:17:47.877Z] 11301.80 IOPS, 44.15 MiB/s 00:24:39.769 Latency(us) 00:24:39.769 [2024-12-05T20:17:47.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.769 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:39.770 Verification LBA range: start 0x0 length 0x4000 00:24:39.770 NVMe0n1 : 15.01 11303.20 44.15 790.24 0.00 10562.71 421.30 19099.06 00:24:39.770 [2024-12-05T20:17:47.878Z] =================================================================================================================== 00:24:39.770 [2024-12-05T20:17:47.878Z] Total : 11303.20 44.15 790.24 0.00 10562.71 421.30 19099.06 00:24:39.770 Received shutdown signal, test time was about 15.000000 seconds 00:24:39.770 00:24:39.770 Latency(us) 00:24:39.770 [2024-12-05T20:17:47.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.770 [2024-12-05T20:17:47.878Z] =================================================================================================================== 00:24:39.770 [2024-12-05T20:17:47.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1408961 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1408961 /var/tmp/bdevperf.sock 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1408961 ']' 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.770 [2024-12-05 21:17:47.708280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.770 21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.028 [2024-12-05 21:17:47.904817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:40.028 
21:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:40.287 NVMe0n1 00:24:40.287 21:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:40.546 00:24:40.546 21:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:41.114 00:24:41.114 21:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.114 21:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:41.114 21:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.373 21:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:44.659 21:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.659 21:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:44.659 21:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1409884 00:24:44.659 21:17:52 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.659 21:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1409884 00:24:45.596 { 00:24:45.596 "results": [ 00:24:45.596 { 00:24:45.596 "job": "NVMe0n1", 00:24:45.596 "core_mask": "0x1", 00:24:45.596 "workload": "verify", 00:24:45.596 "status": "finished", 00:24:45.596 "verify_range": { 00:24:45.596 "start": 0, 00:24:45.596 "length": 16384 00:24:45.596 }, 00:24:45.596 "queue_depth": 128, 00:24:45.596 "io_size": 4096, 00:24:45.596 "runtime": 1.005814, 00:24:45.596 "iops": 11459.375192630048, 00:24:45.596 "mibps": 44.763184346211126, 00:24:45.596 "io_failed": 0, 00:24:45.596 "io_timeout": 0, 00:24:45.596 "avg_latency_us": 11116.629221222413, 00:24:45.596 "min_latency_us": 1466.7580952380952, 00:24:45.596 "max_latency_us": 9299.870476190476 00:24:45.596 } 00:24:45.596 ], 00:24:45.596 "core_count": 1 00:24:45.596 } 00:24:45.855 21:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:45.855 [2024-12-05 21:17:47.324480] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:24:45.855 [2024-12-05 21:17:47.324535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408961 ] 00:24:45.855 [2024-12-05 21:17:47.399712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.855 [2024-12-05 21:17:47.438033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.855 [2024-12-05 21:17:49.365739] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:45.855 [2024-12-05 21:17:49.365782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.855 [2024-12-05 21:17:49.365794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.855 [2024-12-05 21:17:49.365802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.855 [2024-12-05 21:17:49.365808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.855 [2024-12-05 21:17:49.365815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.855 [2024-12-05 21:17:49.365822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.855 [2024-12-05 21:17:49.365829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.855 [2024-12-05 21:17:49.365836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.855 [2024-12-05 21:17:49.365843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:45.855 [2024-12-05 21:17:49.365868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:45.855 [2024-12-05 21:17:49.365883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a14fa0 (9): Bad file descriptor 00:24:45.855 [2024-12-05 21:17:49.457575] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:45.855 Running I/O for 1 seconds... 00:24:45.855 11374.00 IOPS, 44.43 MiB/s 00:24:45.855 Latency(us) 00:24:45.855 [2024-12-05T20:17:53.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.855 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:45.855 Verification LBA range: start 0x0 length 0x4000 00:24:45.855 NVMe0n1 : 1.01 11459.38 44.76 0.00 0.00 11116.63 1466.76 9299.87 00:24:45.855 [2024-12-05T20:17:53.963Z] =================================================================================================================== 00:24:45.855 [2024-12-05T20:17:53.963Z] Total : 11459.38 44.76 0.00 0.00 11116.63 1466.76 9299.87 00:24:45.855 21:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.855 21:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:45.855 21:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.115 21:17:54 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.115 21:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:46.374 21:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.632 21:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1408961 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1408961 ']' 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1408961 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1408961 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1408961' 00:24:49.930 killing 
process with pid 1408961 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1408961 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1408961 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:49.930 21:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.189 rmmod nvme_tcp 00:24:50.189 rmmod nvme_fabrics 00:24:50.189 rmmod nvme_keyring 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1405943 ']' 00:24:50.189 21:17:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1405943 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1405943 ']' 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1405943 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1405943 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1405943' 00:24:50.189 killing process with pid 1405943 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1405943 00:24:50.189 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1405943 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.447 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.448 21:17:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.448 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.448 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.448 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.448 21:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.982 00:24:52.982 real 0m38.066s 00:24:52.982 user 2m0.382s 00:24:52.982 sys 0m8.005s 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.982 ************************************ 00:24:52.982 END TEST nvmf_failover 00:24:52.982 ************************************ 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.982 ************************************ 00:24:52.982 START TEST nvmf_host_discovery 00:24:52.982 ************************************ 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:52.982 * Looking for test storage... 
00:24:52.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:52.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.982 --rc genhtml_branch_coverage=1 00:24:52.982 --rc genhtml_function_coverage=1 00:24:52.982 --rc 
genhtml_legend=1 00:24:52.982 --rc geninfo_all_blocks=1 00:24:52.982 --rc geninfo_unexecuted_blocks=1 00:24:52.982 00:24:52.982 ' 00:24:52.982 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:52.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.982 --rc genhtml_branch_coverage=1 00:24:52.982 --rc genhtml_function_coverage=1 00:24:52.982 --rc genhtml_legend=1 00:24:52.982 --rc geninfo_all_blocks=1 00:24:52.982 --rc geninfo_unexecuted_blocks=1 00:24:52.982 00:24:52.982 ' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:52.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.983 --rc genhtml_branch_coverage=1 00:24:52.983 --rc genhtml_function_coverage=1 00:24:52.983 --rc genhtml_legend=1 00:24:52.983 --rc geninfo_all_blocks=1 00:24:52.983 --rc geninfo_unexecuted_blocks=1 00:24:52.983 00:24:52.983 ' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:52.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.983 --rc genhtml_branch_coverage=1 00:24:52.983 --rc genhtml_function_coverage=1 00:24:52.983 --rc genhtml_legend=1 00:24:52.983 --rc geninfo_all_blocks=1 00:24:52.983 --rc geninfo_unexecuted_blocks=1 00:24:52.983 00:24:52.983 ' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.983 21:18:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.983 21:18:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.983 21:18:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.983 21:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.561 
21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.561 21:18:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.561 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:59.562 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:59.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:59.562 Found net devices under 0000:86:00.0: cvl_0_0 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:59.562 Found net devices under 0000:86:00.1: cvl_0_1 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:24:59.562 00:24:59.562 --- 10.0.0.2 ping statistics --- 00:24:59.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.562 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:24:59.562 00:24:59.562 --- 10.0.0.1 ping statistics --- 00:24:59.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.562 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.562 
21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1414451
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1414451
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1414451 ']'
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:59.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:59.562 21:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.562 [2024-12-05 21:18:06.786082] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:24:59.562 [2024-12-05 21:18:06.786128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:59.562 [2024-12-05 21:18:06.862857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.562 [2024-12-05 21:18:06.904545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:59.562 [2024-12-05 21:18:06.904581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:59.562 [2024-12-05 21:18:06.904588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:59.562 [2024-12-05 21:18:06.904595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:59.562 [2024-12-05 21:18:06.904600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:59.563 [2024-12-05 21:18:06.905138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.563 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.563 [2024-12-05 21:18:07.662866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.822 [2024-12-05 21:18:07.675035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.822 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.822 null0
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.823 null1
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1414611
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1414611 /tmp/host.sock
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1414611 ']'
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:59.823 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:59.823 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:59.823 [2024-12-05 21:18:07.756263] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:24:59.823 [2024-12-05 21:18:07.756309] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414611 ]
00:24:59.823 [2024-12-05 21:18:07.813217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.823 [2024-12-05 21:18:07.857412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.082 21:18:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.082 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.083 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.342 [2024-12-05 21:18:08.276610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.342 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.343 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.601 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.601 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:25:00.601 21:18:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:25:01.169 [2024-12-05 21:18:08.989149] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:01.169 [2024-12-05 21:18:08.989170] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:01.169 [2024-12-05 21:18:08.989182] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:01.169 [2024-12-05 21:18:09.076471] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:01.427 [2024-12-05 21:18:09.299597] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:25:01.427 [2024-12-05 21:18:09.300513] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x133f920:1 started.
00:25:01.427 [2024-12-05 21:18:09.301932] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:01.427 [2024-12-05 21:18:09.301949] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:01.427 [2024-12-05 21:18:09.308506] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x133f920 was disconnected and freed. delete nvme_qpair.
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.427 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.428 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
[2024-12-05 21:18:09.682072] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x133fca0:1 started.
21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-12-05 21:18:09.689292] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x133fca0 was disconnected and freed. delete nvme_qpair.
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:01.687 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.688 [2024-12-05 21:18:09.780991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:01.688 [2024-12-05 21:18:09.781899] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:25:01.688 [2024-12-05 21:18:09.781921] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.688 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' ==
'"nvme0n1' 'nvme0n2"' ']]' 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.947 [2024-12-05 21:18:09.868166] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:01.947 21:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:01.947 [2024-12-05 21:18:09.933938] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:01.947 [2024-12-05 21:18:09.933976] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:01.947 [2024-12-05 21:18:09.933985] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:01.947 [2024-12-05 21:18:09.933990] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:02.882 21:18:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.882 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:03.142 21:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.142 [2024-12-05 21:18:11.032899] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:03.142 [2024-12-05 21:18:11.032923] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.142 [2024-12-05 21:18:11.033947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.142 [2024-12-05 21:18:11.033965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.142 [2024-12-05 21:18:11.033973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.142 [2024-12-05 21:18:11.033980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.142 [2024-12-05 21:18:11.033987] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.142 [2024-12-05 21:18:11.033994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.142 [2024-12-05 21:18:11.034001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.142 [2024-12-05 21:18:11.034008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.142 [2024-12-05 21:18:11.034014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.142 
21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.142 [2024-12-05 21:18:11.043960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.142 [2024-12-05 21:18:11.053995] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.142 [2024-12-05 21:18:11.054008] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.142 [2024-12-05 21:18:11.054015] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.142 [2024-12-05 21:18:11.054024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.142 [2024-12-05 21:18:11.054042] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:03.142 [2024-12-05 21:18:11.054266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.142 [2024-12-05 21:18:11.054280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.142 [2024-12-05 21:18:11.054289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.142 [2024-12-05 21:18:11.054300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.142 [2024-12-05 21:18:11.054311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.142 [2024-12-05 21:18:11.054317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.142 [2024-12-05 21:18:11.054326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.142 [2024-12-05 21:18:11.054332] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.142 [2024-12-05 21:18:11.054337] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.142 [2024-12-05 21:18:11.054341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.142 [2024-12-05 21:18:11.064073] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.142 [2024-12-05 21:18:11.064085] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:03.142 [2024-12-05 21:18:11.064089] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.142 [2024-12-05 21:18:11.064093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.142 [2024-12-05 21:18:11.064109] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.142 [2024-12-05 21:18:11.064288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.142 [2024-12-05 21:18:11.064300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.142 [2024-12-05 21:18:11.064308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.142 [2024-12-05 21:18:11.064318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.142 [2024-12-05 21:18:11.064328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.142 [2024-12-05 21:18:11.064334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.142 [2024-12-05 21:18:11.064340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.142 [2024-12-05 21:18:11.064346] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.142 [2024-12-05 21:18:11.064350] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.142 [2024-12-05 21:18:11.064354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:03.142 [2024-12-05 21:18:11.074140] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.142 [2024-12-05 21:18:11.074151] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.142 [2024-12-05 21:18:11.074155] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.142 [2024-12-05 21:18:11.074159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.142 [2024-12-05 21:18:11.074171] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.142 [2024-12-05 21:18:11.074411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.142 [2024-12-05 21:18:11.074424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.142 [2024-12-05 21:18:11.074431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.142 [2024-12-05 21:18:11.074441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.142 [2024-12-05 21:18:11.074451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.142 [2024-12-05 21:18:11.074457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.142 [2024-12-05 21:18:11.074464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.142 [2024-12-05 21:18:11.074469] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:03.142 [2024-12-05 21:18:11.074473] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.142 [2024-12-05 21:18:11.074477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.142 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:25:03.143 [2024-12-05 21:18:11.084202] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.143 [2024-12-05 21:18:11.084218] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.143 [2024-12-05 21:18:11.084223] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.084227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.143 [2024-12-05 21:18:11.084242] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.084359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.143 [2024-12-05 21:18:11.084376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.143 [2024-12-05 21:18:11.084383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.143 [2024-12-05 21:18:11.084394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.143 [2024-12-05 21:18:11.084404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.143 [2024-12-05 21:18:11.084410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.143 [2024-12-05 21:18:11.084417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.143 [2024-12-05 21:18:11.084423] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:03.143 [2024-12-05 21:18:11.084429] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.143 [2024-12-05 21:18:11.084433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.143 [2024-12-05 21:18:11.094273] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.143 [2024-12-05 21:18:11.094286] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.143 [2024-12-05 21:18:11.094290] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.094294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.143 [2024-12-05 21:18:11.094308] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:03.143 [2024-12-05 21:18:11.094465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.143 [2024-12-05 21:18:11.094477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.143 [2024-12-05 21:18:11.094490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.143 [2024-12-05 21:18:11.094501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.143 [2024-12-05 21:18:11.094510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.143 [2024-12-05 21:18:11.094517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.143 [2024-12-05 21:18:11.094523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.143 [2024-12-05 21:18:11.094529] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.143 [2024-12-05 21:18:11.094533] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.143 [2024-12-05 21:18:11.094537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.143 [2024-12-05 21:18:11.104338] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.143 [2024-12-05 21:18:11.104348] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:03.143 [2024-12-05 21:18:11.104352] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.104356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.143 [2024-12-05 21:18:11.104371] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.104521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.143 [2024-12-05 21:18:11.104532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.143 [2024-12-05 21:18:11.104538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.143 [2024-12-05 21:18:11.104548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.143 [2024-12-05 21:18:11.104563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.143 [2024-12-05 21:18:11.104570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.143 [2024-12-05 21:18:11.104577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.143 [2024-12-05 21:18:11.104582] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.143 [2024-12-05 21:18:11.104587] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.143 [2024-12-05 21:18:11.104591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:03.143 [2024-12-05 21:18:11.114401] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.143 [2024-12-05 21:18:11.114411] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.143 [2024-12-05 21:18:11.114415] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.114418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.143 [2024-12-05 21:18:11.114430] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.143 [2024-12-05 21:18:11.114713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.143 [2024-12-05 21:18:11.114728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311930 with addr=10.0.0.2, port=4420 00:25:03.143 [2024-12-05 21:18:11.114735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311930 is same with the state(6) to be set 00:25:03.143 [2024-12-05 21:18:11.114745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311930 (9): Bad file descriptor 00:25:03.143 [2024-12-05 21:18:11.114761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.143 [2024-12-05 21:18:11.114767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.143 [2024-12-05 21:18:11.114774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.143 [2024-12-05 21:18:11.114779] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:03.143 [2024-12-05 21:18:11.114783] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.143 [2024-12-05 21:18:11.114787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.143 [2024-12-05 21:18:11.118777] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:03.143 [2024-12-05 21:18:11.118794] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:03.143 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.144 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.402 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:03.402 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.402 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:03.402 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.403 21:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.338 [2024-12-05 21:18:12.417829] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:04.338 [2024-12-05 21:18:12.417846] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:25:04.338 [2024-12-05 21:18:12.417856] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:04.597 [2024-12-05 21:18:12.505113] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:04.597 [2024-12-05 21:18:12.611867] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:04.597 [2024-12-05 21:18:12.612455] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x130caf0:1 started. 00:25:04.597 [2024-12-05 21:18:12.614066] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:04.597 [2024-12-05 21:18:12.614089] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.597 [2024-12-05 21:18:12.616910] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x130caf0 was disconnected and freed. delete nvme_qpair. 
00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.597 request: 00:25:04.597 { 00:25:04.597 "name": "nvme", 00:25:04.597 "trtype": "tcp", 00:25:04.597 "traddr": "10.0.0.2", 00:25:04.597 "adrfam": "ipv4", 00:25:04.597 "trsvcid": "8009", 00:25:04.597 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:04.597 "wait_for_attach": true, 00:25:04.597 "method": "bdev_nvme_start_discovery", 00:25:04.597 "req_id": 1 00:25:04.597 } 00:25:04.597 Got JSON-RPC error response 00:25:04.597 response: 00:25:04.597 { 00:25:04.597 "code": -17, 00:25:04.597 "message": "File exists" 00:25:04.597 } 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.597 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.856 request: 00:25:04.856 { 00:25:04.856 "name": "nvme_second", 00:25:04.856 "trtype": "tcp", 00:25:04.856 "traddr": "10.0.0.2", 00:25:04.856 "adrfam": "ipv4", 00:25:04.856 "trsvcid": "8009", 00:25:04.856 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:04.856 "wait_for_attach": true, 00:25:04.856 "method": "bdev_nvme_start_discovery", 00:25:04.856 "req_id": 1 00:25:04.856 } 00:25:04.856 Got JSON-RPC error response 00:25:04.856 response: 00:25:04.856 { 00:25:04.856 "code": -17, 00:25:04.856 "message": "File exists" 00:25:04.856 } 
00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:04.856 21:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.793 [2024-12-05 21:18:13.853516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.793 [2024-12-05 21:18:13.853541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1327cf0 with addr=10.0.0.2, port=8010 00:25:05.793 [2024-12-05 21:18:13.853553] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:05.793 [2024-12-05 21:18:13.853558] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:05.793 [2024-12-05 21:18:13.853564] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:07.168 [2024-12-05 21:18:14.855883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.168 [2024-12-05 21:18:14.855907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1327cf0 with addr=10.0.0.2, port=8010 00:25:07.168 [2024-12-05 21:18:14.855917] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:07.168 [2024-12-05 21:18:14.855923] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:07.168 [2024-12-05 21:18:14.855929] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:08.099 [2024-12-05 21:18:15.858116] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:08.099 request: 00:25:08.099 { 00:25:08.099 "name": "nvme_second", 00:25:08.099 "trtype": "tcp", 00:25:08.099 "traddr": "10.0.0.2", 00:25:08.099 "adrfam": "ipv4", 00:25:08.099 "trsvcid": "8010", 00:25:08.099 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:08.099 "wait_for_attach": false, 00:25:08.099 "attach_timeout_ms": 3000, 00:25:08.099 "method": "bdev_nvme_start_discovery", 00:25:08.099 "req_id": 1 
00:25:08.099 } 00:25:08.099 Got JSON-RPC error response 00:25:08.099 response: 00:25:08.099 { 00:25:08.099 "code": -110, 00:25:08.099 "message": "Connection timed out" 00:25:08.099 } 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:08.099 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1414611 00:25:08.100 21:18:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.100 rmmod nvme_tcp 00:25:08.100 rmmod nvme_fabrics 00:25:08.100 rmmod nvme_keyring 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1414451 ']' 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1414451 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1414451 ']' 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1414451 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.100 21:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1414451 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1414451' 00:25:08.100 killing process with pid 1414451 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1414451 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1414451 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.100 21:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:25:10.773 00:25:10.773 real 0m17.676s 00:25:10.773 user 0m21.095s 00:25:10.773 sys 0m5.723s 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 ************************************ 00:25:10.773 END TEST nvmf_host_discovery 00:25:10.773 ************************************ 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 ************************************ 00:25:10.773 START TEST nvmf_host_multipath_status 00:25:10.773 ************************************ 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:10.773 * Looking for test storage... 
00:25:10.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:10.773 21:18:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.773 21:18:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.773 --rc genhtml_branch_coverage=1 00:25:10.773 --rc genhtml_function_coverage=1 00:25:10.773 --rc genhtml_legend=1 00:25:10.773 --rc geninfo_all_blocks=1 00:25:10.773 --rc geninfo_unexecuted_blocks=1 00:25:10.773 00:25:10.773 ' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.773 --rc genhtml_branch_coverage=1 00:25:10.773 --rc genhtml_function_coverage=1 00:25:10.773 --rc genhtml_legend=1 00:25:10.773 --rc geninfo_all_blocks=1 00:25:10.773 --rc geninfo_unexecuted_blocks=1 00:25:10.773 00:25:10.773 ' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.773 --rc genhtml_branch_coverage=1 00:25:10.773 --rc genhtml_function_coverage=1 00:25:10.773 --rc genhtml_legend=1 00:25:10.773 --rc geninfo_all_blocks=1 00:25:10.773 --rc geninfo_unexecuted_blocks=1 00:25:10.773 00:25:10.773 ' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.773 --rc genhtml_branch_coverage=1 00:25:10.773 --rc genhtml_function_coverage=1 00:25:10.773 --rc genhtml_legend=1 00:25:10.773 --rc geninfo_all_blocks=1 00:25:10.773 --rc geninfo_unexecuted_blocks=1 00:25:10.773 00:25:10.773 ' 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.773 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:10.773 
21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.774 21:18:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.774 21:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.347 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:17.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:17.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:17.348 Found net devices under 0000:86:00.0: cvl_0_0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.348 21:18:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:17.348 Found net devices under 0000:86:00.1: cvl_0_1 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.348 21:18:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:25:17.348 00:25:17.348 --- 10.0.0.2 ping statistics --- 00:25:17.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.348 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:25:17.348 00:25:17.348 --- 10.0.0.1 ping statistics --- 00:25:17.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.348 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1419948 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1419948 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1419948 ']' 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.348 [2024-12-05 21:18:24.549723] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:25:17.348 [2024-12-05 21:18:24.549771] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.348 [2024-12-05 21:18:24.630400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:17.348 [2024-12-05 21:18:24.672103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.348 [2024-12-05 21:18:24.672139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:17.348 [2024-12-05 21:18:24.672147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.348 [2024-12-05 21:18:24.672153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.348 [2024-12-05 21:18:24.672158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.348 [2024-12-05 21:18:24.673394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.348 [2024-12-05 21:18:24.673396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1419948 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:17.348 [2024-12-05 21:18:24.975425] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.348 21:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:17.348 Malloc0 00:25:17.348 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:17.348 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:17.606 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.863 [2024-12-05 21:18:25.742799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.863 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:17.863 [2024-12-05 21:18:25.927302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.863 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1420199 00:25:17.863 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:17.863 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.863 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1420199 /var/tmp/bdevperf.sock 00:25:17.863 21:18:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1420199 ']' 00:25:17.863 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.864 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.864 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.864 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.864 21:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.121 21:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.121 21:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:18.121 21:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:18.378 21:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:18.943 Nvme0n1 00:25:18.944 21:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:19.510 Nvme0n1 00:25:19.510 21:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:19.510 21:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:21.414 21:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:21.414 21:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:21.672 21:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.931 21:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:22.868 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:22.868 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.868 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.868 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.126 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.126 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:23.126 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.126 21:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.126 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.126 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.126 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.126 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.384 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.384 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.384 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.384 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.643 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.643 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.643 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.643 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.902 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.902 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.902 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.902 21:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.161 21:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.161 21:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:24.161 21:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:24.161 21:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:24.420 21:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:25.358 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:25.359 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:25.359 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.359 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.618 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.618 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:25.618 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.618 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.877 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.877 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.877 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:25.877 21:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.136 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.136 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.136 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.136 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.395 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.395 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.395 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.395 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:26.655 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:26.914 21:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:27.183 21:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:28.119 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:28.119 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.119 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.119 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.377 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.377 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:28.377 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.377 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.635 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.635 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.635 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.635 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.895 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.895 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.895 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:28.895 21:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.154 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.414 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.414 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:29.414 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:29.673 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:29.932 21:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:30.868 21:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:30.868 21:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:30.868 21:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.868 21:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.127 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.127 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.127 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.127 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.386 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:31.645 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.645 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:31.645 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.645 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:31.904 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.904 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:31.904 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.904 21:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.162 21:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.163 21:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:32.163 21:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:32.421 21:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:32.421 21:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:33.799 21:18:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.799 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.058 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.058 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.058 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.058 21:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.058 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.058 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.058 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.058 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:34.317 
21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.317 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:34.317 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.317 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:34.576 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.576 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:34.576 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.576 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:34.835 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.835 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:34.836 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:34.836 21:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:35.094 21:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:36.030 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:36.030 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:36.030 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.030 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.290 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.290 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:36.290 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.290 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:36.549 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.549 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:36.549 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.549 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.808 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.808 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.808 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.808 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.068 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.068 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:37.068 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.068 21:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.068 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.068 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:37.068 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.068 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.326 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.326 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:37.584 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:37.584 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:37.842 21:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:38.100 21:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:39.035 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:39.035 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:39.035 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.035 
21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.294 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.294 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:39.294 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.294 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.552 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.552 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.552 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.552 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.811 
21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.811 21:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.070 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.070 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.070 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.070 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.327 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.328 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:40.328 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:40.586 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:40.845 21:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:41.780 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:41.780 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:41.780 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.780 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.039 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.039 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.039 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.039 21:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.298 21:18:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.298 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.556 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.556 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:42.556 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.556 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.816 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.816 
21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.816 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.816 21:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.075 21:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.075 21:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:43.075 21:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:43.333 21:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:43.333 21:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.712 21:18:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.712 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.971 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.971 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.971 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.971 21:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.971 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.971 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.971 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.971 21:18:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.230 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.230 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.230 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.230 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.489 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.489 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.489 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.489 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.748 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.748 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:45.748 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.006 21:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:46.266 21:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:47.210 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:47.210 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:47.210 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.210 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.475 21:18:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.475 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.733 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.733 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.733 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.733 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.990 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.990 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.990 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.990 21:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.248 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.248 
21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:48.248 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.248 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1420199 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1420199 ']' 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1420199 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1420199 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1420199' 00:25:48.521 killing process with pid 1420199 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1420199 00:25:48.521 
21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1420199 00:25:48.521 { 00:25:48.521 "results": [ 00:25:48.521 { 00:25:48.521 "job": "Nvme0n1", 00:25:48.521 "core_mask": "0x4", 00:25:48.521 "workload": "verify", 00:25:48.521 "status": "terminated", 00:25:48.521 "verify_range": { 00:25:48.521 "start": 0, 00:25:48.521 "length": 16384 00:25:48.521 }, 00:25:48.521 "queue_depth": 128, 00:25:48.521 "io_size": 4096, 00:25:48.521 "runtime": 28.929079, 00:25:48.521 "iops": 10706.735599844018, 00:25:48.521 "mibps": 41.823185936890695, 00:25:48.521 "io_failed": 0, 00:25:48.521 "io_timeout": 0, 00:25:48.521 "avg_latency_us": 11935.071800574868, 00:25:48.521 "min_latency_us": 124.83047619047619, 00:25:48.521 "max_latency_us": 3019898.88 00:25:48.521 } 00:25:48.521 ], 00:25:48.521 "core_count": 1 00:25:48.521 } 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1420199 00:25:48.521 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:48.521 [2024-12-05 21:18:25.988723] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:25:48.521 [2024-12-05 21:18:25.988775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420199 ] 00:25:48.521 [2024-12-05 21:18:26.048349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.521 [2024-12-05 21:18:26.091332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.521 Running I/O for 90 seconds... 
00:25:48.521 11375.00 IOPS, 44.43 MiB/s [2024-12-05T20:18:56.629Z] 11392.00 IOPS, 44.50 MiB/s [2024-12-05T20:18:56.629Z] 11419.00 IOPS, 44.61 MiB/s [2024-12-05T20:18:56.629Z] 11440.00 IOPS, 44.69 MiB/s [2024-12-05T20:18:56.629Z] 11453.00 IOPS, 44.74 MiB/s [2024-12-05T20:18:56.629Z] 11441.67 IOPS, 44.69 MiB/s [2024-12-05T20:18:56.629Z] 11466.43 IOPS, 44.79 MiB/s [2024-12-05T20:18:56.629Z] 11473.50 IOPS, 44.82 MiB/s [2024-12-05T20:18:56.629Z] 11478.22 IOPS, 44.84 MiB/s [2024-12-05T20:18:56.629Z] 11474.60 IOPS, 44.82 MiB/s [2024-12-05T20:18:56.629Z] 11466.82 IOPS, 44.79 MiB/s [2024-12-05T20:18:56.629Z] 11465.25 IOPS, 44.79 MiB/s [2024-12-05T20:18:56.629Z] [2024-12-05 21:18:40.283290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283417] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:48.521 [2024-12-05 21:18:40.283490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.521 [2024-12-05 21:18:40.283497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.522 [2024-12-05 21:18:40.284052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.522 [2024-12-05 21:18:40.284080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.522 [2024-12-05 21:18:40.284611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.522 [2024-12-05 21:18:40.284634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.522 [2024-12-05 21:18:40.284655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:48.522 [2024-12-05 21:18:40.284669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.522 [2024-12-05 21:18:40.284676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.284697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.284717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.284738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.284816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.284982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.284989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127400 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.523 [2024-12-05 21:18:40.285387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.285409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.285432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:48.523 [2024-12-05 21:18:40.285447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.523 [2024-12-05 21:18:40.285455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.285469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.285477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.285492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.285499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.285514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.285522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.285538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.285545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127944 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.524 [2024-12-05 21:18:40.286619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:25:48.524 [2024-12-05 21:18:40.286770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.524 [2024-12-05 21:18:40.286875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:48.524 [2024-12-05 21:18:40.286891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:48.524 [2024-12-05 21:18:40.286898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.286916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.286922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.286939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.286946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.286963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.286970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.286986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.286993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.287010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.287018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:48.525 [2024-12-05 21:18:40.287035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.287042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.287058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.287065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.287082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.287089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.287106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.287113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.287134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.525 [2024-12-05 21:18:40.287141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:48.525 [2024-12-05 21:18:40.287157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:48.525 [2024-12-05 21:18:40.287165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:48.525 [2024-12-05 21:18:40.287618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.525 [2024-12-05 21:18:40.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:48.525 11303.85 IOPS, 44.16 MiB/s [2024-12-05T20:18:56.633Z] 10496.43 IOPS, 41.00 MiB/s [2024-12-05T20:18:56.633Z] 9796.67 IOPS, 38.27 MiB/s [2024-12-05T20:18:56.633Z] 9324.69 IOPS, 36.42 MiB/s [2024-12-05T20:18:56.633Z] 9457.18 IOPS, 36.94 MiB/s [2024-12-05T20:18:56.633Z] 9577.56 IOPS, 37.41 MiB/s [2024-12-05T20:18:56.633Z] 9755.58 IOPS, 38.11 MiB/s [2024-12-05T20:18:56.633Z] 9947.75 IOPS, 38.86 MiB/s [2024-12-05T20:18:56.633Z] 10125.86 IOPS, 39.55 MiB/s [2024-12-05T20:18:56.633Z] 10199.23 IOPS, 39.84 MiB/s [2024-12-05T20:18:56.633Z] 10250.78 IOPS, 40.04 MiB/s [2024-12-05T20:18:56.633Z] 10305.75 IOPS, 40.26 MiB/s [2024-12-05T20:18:56.633Z] 10446.12 IOPS, 40.81 MiB/s [2024-12-05T20:18:56.633Z] 10573.27 IOPS, 41.30 MiB/s [2024-12-05T20:18:56.634Z] [2024-12-05 21:18:54.113577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.113616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.113637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.113645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.113662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.113669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.113682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.113689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.113701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.113708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:48.526 [2024-12-05 21:18:54.114679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.526 [2024-12-05 21:18:54.114686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.114990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.114996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.527 [2024-12-05 21:18:54.115073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.527 [2024-12-05 21:18:54.115092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.527 [2024-12-05 21:18:54.115111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.527 [2024-12-05 21:18:54.115247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:48.527 [2024-12-05 21:18:54.115259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.528 [2024-12-05 21:18:54.115266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.528 [2024-12-05 21:18:54.115285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.528 [2024-12-05 21:18:54.115304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.528 [2024-12-05 21:18:54.115323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.115991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.115997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.116010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.116016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.116028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.116037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.116049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.116056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.116068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.116075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.528 [2024-12-05 21:18:54.117347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:48.528 [2024-12-05 21:18:54.117359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.529 [2024-12-05 21:18:54.117510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.529 [2024-12-05 21:18:54.117530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:48.529 [2024-12-05 21:18:54.117542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.117549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.117561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.529 [2024-12-05 21:18:54.117568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.117581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.529 [2024-12-05 21:18:54.117588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.117600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.529 [2024-12-05 21:18:54.117608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.117620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.117627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.117640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.117647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:48.529 [2024-12-05 21:18:54.118833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.529 [2024-12-05 21:18:54.118841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.118854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.118861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.118873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.118880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.118893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.118900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.119270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.119982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.119989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.120001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.530 [2024-12-05 21:18:54.120008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:48.530 [2024-12-05 21:18:54.120020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.530 [2024-12-05 21:18:54.120027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.531 [2024-12-05 21:18:54.120397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:48.531 [2024-12-05 21:18:54.120548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.531 [2024-12-05 21:18:54.120555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.120987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.120996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.121009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.121016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.121028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.121035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.121047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.121054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.121067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.121073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.121086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.121093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.122623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.122645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.122667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.122688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.122707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.122726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.532 [2024-12-05 21:18:54.122746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.122765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.122784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.122803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.532 [2024-12-05 21:18:54.122823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:48.532 [2024-12-05 21:18:54.122835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.122842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.122854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.122861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.122874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.122880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.122893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.122900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.122914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.122920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.122933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.122940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.533 [2024-12-05 21:18:54.124825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.124843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.124863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.124882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.124901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.124920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:48.533 [2024-12-05 21:18:54.124932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.533 [2024-12-05 21:18:54.124940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.124952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.124961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.124973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.124980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.124993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.124999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.125114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.125171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.125191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.125210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.125242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.125249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.534 [2024-12-05 21:18:54.126071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:48.534 [2024-12-05 21:18:54.126278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.534 [2024-12-05 21:18:54.126285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.126385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.126404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.126423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.126442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.126461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.126560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.126573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.134571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.134593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.134603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.135185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.135294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.135439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.135465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.135482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.135491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.137307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.137327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.137347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.137357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.137381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.535 [2024-12-05 21:18:54.137391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.137408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.137417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:48.535 [2024-12-05 21:18:54.137434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.535 [2024-12-05 21:18:54.137444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.137970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.137987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.137996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.138013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.138022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.138039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.138049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.138065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.138075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.138092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.138101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.138119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.138129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.138147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.138156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.139657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.139686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.536 [2024-12-05 21:18:54.139712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.139740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.139767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.139794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.139821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:48.536 [2024-12-05 21:18:54.139839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.536 [2024-12-05 21:18:54.139848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.139866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.139875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.139892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.139902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.139922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.139932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.139949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.139960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.139977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.139987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.140005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.140015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.140032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.140041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.140058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.537 [2024-12-05 21:18:54.141848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:48.537 [2024-12-05 21:18:54.141890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.537 [2024-12-05 21:18:54.141899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.141916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.141925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.141941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.141951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.141969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.141979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.141995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.142185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.142289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.142315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.142340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.142357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.142370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.144409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.144517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.538 [2024-12-05 21:18:54.144558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.144579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.144600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.144621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.538 [2024-12-05 21:18:54.144641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:48.538 [2024-12-05 21:18:54.144655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.144683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.144704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.144746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.144809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.144830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.144906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.144914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.539 [2024-12-05 21:18:54.146667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:48.539 [2024-12-05 21:18:54.146680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.539 [2024-12-05 21:18:54.146688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.146709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.146730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.146752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.146794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.146857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.146921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.146956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.146964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.148607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.540 [2024-12-05 21:18:54.148625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.148641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.148652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.148666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.148674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.148687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.148695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.148708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.540 [2024-12-05 21:18:54.148716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:48.540 [2024-12-05 21:18:54.148729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.148737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.148757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.148778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.148799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.148820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.148981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.148988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.149009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.149029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.149050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.149071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.540 [2024-12-05 21:18:54.149092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.149113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.540 [2024-12-05 21:18:54.149133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:48.540 [2024-12-05 21:18:54.149147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.149154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.149169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.149176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.149190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.149197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.149211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.149218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.149231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.149239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.149252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.149260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.150685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.150783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.150790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.151169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.151192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.151213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.151234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.541 [2024-12-05 21:18:54.151255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.151276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.151299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.541 [2024-12-05 21:18:54.151320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:48.541 [2024-12-05 21:18:54.151334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.151341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.151355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.151362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.151381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.151389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.151402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.151409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.151423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.151430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.151444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.151451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.151473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.152920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.152938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.152953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.152961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.152975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.152982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.152999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.153603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.153680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.153688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.542 [2024-12-05 21:18:54.155304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:48.542 [2024-12-05 21:18:54.155539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:48.542 [2024-12-05 21:18:54.155547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.542 [2024-12-05 21:18:54.155644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.542 [2024-12-05 21:18:54.155663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.542 [2024-12-05 21:18:54.155683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.155752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.155759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.542 [2024-12-05 21:18:54.156779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:48.542 [2024-12-05 21:18:54.156793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.542 [2024-12-05 21:18:54.156801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.156820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.156840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.156862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.156881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.156900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.156920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.156939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.156958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.156977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.156989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.156996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.157325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.157457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.157463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.158201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.158222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.158242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.158261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.158280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.158302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.158315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.158322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.159941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.159957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.159972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.159979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.159991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.159998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.543 [2024-12-05 21:18:54.160572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.543 [2024-12-05 21:18:54.160591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:48.543 [2024-12-05 21:18:54.160603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.160610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.160632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.160651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.160670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.160727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.160855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.160862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.161883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.161904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.161924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.161943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.161963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.161981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.161994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.162737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.162768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.162775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.163850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.163978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.163985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.164008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.544 [2024-12-05 21:18:54.164027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.164046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.164065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.164084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.164103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.544 [2024-12-05 21:18:54.164122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:48.544 [2024-12-05 21:18:54.164134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.545 [2024-12-05 21:18:54.164141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.545 [2024-12-05 21:18:54.164160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.545 [2024-12-05 21:18:54.164179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.545 [2024-12-05 21:18:54.164198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.545 [2024-12-05 21:18:54.164218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.545 [2024-12-05 21:18:54.164239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.545 [2024-12-05 21:18:54.164258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.545 [2024-12-05 21:18:54.164277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:48.545 [2024-12-05 21:18:54.164290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.545 [2024-12-05 21:18:54.164297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:48.545
10649.93 IOPS, 41.60 MiB/s [2024-12-05T20:18:56.653Z] 10686.29 IOPS, 41.74 MiB/s [2024-12-05T20:18:56.653Z] Received shutdown signal, test time was about 28.929730 seconds 00:25:48.545
00:25:48.545 Latency(us)
00:25:48.545 [2024-12-05T20:18:56.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:48.545 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:48.545 Verification LBA range: start 0x0 length 0x4000
00:25:48.545 Nvme0n1 : 28.93 10706.74 41.82 0.00 0.00 11935.07 124.83 3019898.88
00:25:48.545 [2024-12-05T20:18:56.653Z]
=================================================================================================================== 00:25:48.545 [2024-12-05T20:18:56.653Z] Total : 10706.74 41.82 0.00 0.00 11935.07 124.83 3019898.88 00:25:48.546 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.804 rmmod nvme_tcp 00:25:48.804 rmmod nvme_fabrics 00:25:48.804 rmmod nvme_keyring 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:48.804 21:18:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1419948 ']' 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1419948 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1419948 ']' 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1419948 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.804 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1419948 00:25:49.063 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.063 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.063 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1419948' 00:25:49.063 killing process with pid 1419948 00:25:49.063 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1419948 00:25:49.063 21:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1419948 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:49.063 21:18:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.063 21:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:51.594 00:25:51.594 real 0m40.834s 00:25:51.594 user 1m50.581s 00:25:51.594 sys 0m11.799s 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.594 ************************************ 00:25:51.594 END TEST nvmf_host_multipath_status 00:25:51.594 ************************************ 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.594 21:18:59 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.594 ************************************ 00:25:51.594 START TEST nvmf_discovery_remove_ifc 00:25:51.594 ************************************ 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:51.594 * Looking for test storage... 00:25:51.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.594 21:18:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:51.594 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.595 
21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:51.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.595 --rc genhtml_branch_coverage=1 00:25:51.595 --rc genhtml_function_coverage=1 00:25:51.595 --rc genhtml_legend=1 00:25:51.595 --rc geninfo_all_blocks=1 00:25:51.595 --rc geninfo_unexecuted_blocks=1 00:25:51.595 00:25:51.595 ' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:51.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.595 --rc genhtml_branch_coverage=1 00:25:51.595 --rc genhtml_function_coverage=1 00:25:51.595 --rc genhtml_legend=1 00:25:51.595 --rc geninfo_all_blocks=1 00:25:51.595 --rc geninfo_unexecuted_blocks=1 00:25:51.595 00:25:51.595 ' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:51.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.595 --rc genhtml_branch_coverage=1 00:25:51.595 --rc genhtml_function_coverage=1 00:25:51.595 --rc genhtml_legend=1 00:25:51.595 --rc geninfo_all_blocks=1 00:25:51.595 --rc geninfo_unexecuted_blocks=1 00:25:51.595 00:25:51.595 ' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:51.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.595 --rc genhtml_branch_coverage=1 00:25:51.595 --rc genhtml_function_coverage=1 00:25:51.595 --rc genhtml_legend=1 
00:25:51.595 --rc geninfo_all_blocks=1 00:25:51.595 --rc geninfo_unexecuted_blocks=1 00:25:51.595 00:25:51.595 ' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:51.595 
21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:51.595 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.596 21:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:58.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:58.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:58.320 Found net devices under 0000:86:00.0: cvl_0_0 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.320 21:19:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:58.320 Found net devices under 0000:86:00.1: cvl_0_1 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.320 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.321 21:19:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.321 21:19:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:25:58.321 00:25:58.321 --- 10.0.0.2 ping statistics --- 00:25:58.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.321 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:58.321 00:25:58.321 --- 10.0.0.1 ping statistics --- 00:25:58.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.321 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1428958 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1428958 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1428958 ']' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.321 [2024-12-05 21:19:05.469956] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:25:58.321 [2024-12-05 21:19:05.470008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.321 [2024-12-05 21:19:05.549892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.321 [2024-12-05 21:19:05.591514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.321 [2024-12-05 21:19:05.591549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:58.321 [2024-12-05 21:19:05.591556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.321 [2024-12-05 21:19:05.591562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.321 [2024-12-05 21:19:05.591567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.321 [2024-12-05 21:19:05.592121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.321 [2024-12-05 21:19:05.740571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.321 [2024-12-05 21:19:05.748732] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:58.321 null0 00:25:58.321 [2024-12-05 21:19:05.780730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1428984 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1428984 /tmp/host.sock 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1428984 ']' 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:58.321 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.321 [2024-12-05 21:19:05.851275] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:25:58.321 [2024-12-05 21:19:05.851317] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428984 ] 00:25:58.321 [2024-12-05 21:19:05.924721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.321 [2024-12-05 21:19:05.966854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.321 21:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.322 21:19:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.322 21:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.259 [2024-12-05 21:19:07.137525] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.259 [2024-12-05 21:19:07.137546] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.259 [2024-12-05 21:19:07.137559] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.259 [2024-12-05 21:19:07.225825] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:59.259 [2024-12-05 21:19:07.328568] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:59.259 [2024-12-05 21:19:07.329338] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x149c940:1 started. 
00:25:59.259 [2024-12-05 21:19:07.330648] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:59.259 [2024-12-05 21:19:07.330687] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:59.259 [2024-12-05 21:19:07.330707] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:59.259 [2024-12-05 21:19:07.330720] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:59.259 [2024-12-05 21:19:07.330740] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.259 [2024-12-05 21:19:07.336084] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x149c940 was disconnected and freed. delete nvme_qpair. 
00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.259 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.518 21:19:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.518 21:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.454 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.454 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.455 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.455 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.455 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.455 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.455 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.455 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.714 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.714 21:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.650 21:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.586 21:19:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.586 21:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.963 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.963 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.963 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.963 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.963 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.963 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.964 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.964 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.964 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.964 21:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.899 21:19:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.899 [2024-12-05 21:19:12.772284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:04.899 [2024-12-05 21:19:12.772324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.899 [2024-12-05 21:19:12.772335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.899 [2024-12-05 21:19:12.772344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.899 [2024-12-05 21:19:12.772352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.899 [2024-12-05 21:19:12.772359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.899 [2024-12-05 21:19:12.772371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.899 [2024-12-05 21:19:12.772379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.899 [2024-12-05 21:19:12.772385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.899 [2024-12-05 21:19:12.772394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.899 [2024-12-05 21:19:12.772400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.899 [2024-12-05 21:19:12.772407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479160 is same with the state(6) to be set 00:26:04.899 [2024-12-05 21:19:12.782305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1479160 (9): Bad file descriptor 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:04.899 21:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.899 [2024-12-05 21:19:12.792342] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:04.899 [2024-12-05 21:19:12.792355] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:04.899 [2024-12-05 21:19:12.792361] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:04.900 [2024-12-05 21:19:12.792369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:04.900 [2024-12-05 21:19:12.792389] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.835 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.835 [2024-12-05 21:19:13.842402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:05.835 [2024-12-05 21:19:13.842481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1479160 with addr=10.0.0.2, port=4420 00:26:05.835 [2024-12-05 21:19:13.842514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479160 is same with the state(6) to be set 00:26:05.835 [2024-12-05 21:19:13.842563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1479160 (9): Bad file descriptor 00:26:05.835 [2024-12-05 21:19:13.843517] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:05.835 [2024-12-05 21:19:13.843578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.836 [2024-12-05 21:19:13.843602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.836 [2024-12-05 21:19:13.843625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.836 [2024-12-05 21:19:13.843645] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.836 [2024-12-05 21:19:13.843661] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.836 [2024-12-05 21:19:13.843675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.836 [2024-12-05 21:19:13.843697] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.836 [2024-12-05 21:19:13.843712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.836 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.836 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.836 21:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.773 [2024-12-05 21:19:14.846229] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:06.773 [2024-12-05 21:19:14.846249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:06.773 [2024-12-05 21:19:14.846263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:06.773 [2024-12-05 21:19:14.846270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:06.773 [2024-12-05 21:19:14.846277] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:06.773 [2024-12-05 21:19:14.846284] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:06.773 [2024-12-05 21:19:14.846288] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:06.773 [2024-12-05 21:19:14.846292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:06.773 [2024-12-05 21:19:14.846312] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:06.773 [2024-12-05 21:19:14.846331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.773 [2024-12-05 21:19:14.846340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.773 [2024-12-05 21:19:14.846350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.773 [2024-12-05 21:19:14.846357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.773 [2024-12-05 21:19:14.846364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:06.773 [2024-12-05 21:19:14.846374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.773 [2024-12-05 21:19:14.846381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.773 [2024-12-05 21:19:14.846388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.773 [2024-12-05 21:19:14.846395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.773 [2024-12-05 21:19:14.846402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.773 [2024-12-05 21:19:14.846408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:06.773 [2024-12-05 21:19:14.846855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1468450 (9): Bad file descriptor 00:26:06.773 [2024-12-05 21:19:14.847865] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:06.773 [2024-12-05 21:19:14.847875] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.773 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.032 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.033 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:07.033 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.033 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.033 21:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:07.033 21:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.965 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.222 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:08.222 21:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.154 [2024-12-05 21:19:16.902519] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:09.154 [2024-12-05 21:19:16.902536] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:09.154 [2024-12-05 21:19:16.902551] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:09.154 [2024-12-05 21:19:16.990813] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:09.154 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.154 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.154 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.154 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.154 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.154 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.155 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.155 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.155 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:09.155 21:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.155 [2024-12-05 21:19:17.215889] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:09.155 [2024-12-05 21:19:17.216550] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x144d090:1 started. 
00:26:09.155 [2024-12-05 21:19:17.217551] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:09.155 [2024-12-05 21:19:17.217580] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:09.155 [2024-12-05 21:19:17.217598] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:09.155 [2024-12-05 21:19:17.217610] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:09.155 [2024-12-05 21:19:17.217617] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:09.155 [2024-12-05 21:19:17.221280] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x144d090 was disconnected and freed. delete nvme_qpair. 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:10.102 21:19:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1428984 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1428984 ']' 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1428984 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.102 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1428984 00:26:10.360 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.360 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.360 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1428984' 00:26:10.360 killing process with pid 1428984 00:26:10.360 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1428984 00:26:10.360 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1428984 00:26:10.360 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.361 
21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.361 rmmod nvme_tcp 00:26:10.361 rmmod nvme_fabrics 00:26:10.361 rmmod nvme_keyring 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1428958 ']' 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1428958 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1428958 ']' 00:26:10.361 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1428958 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1428958 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1428958' 00:26:10.618 
killing process with pid 1428958 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1428958 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1428958 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.618 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.619 21:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.151 00:26:13.151 real 0m21.507s 00:26:13.151 user 0m26.759s 00:26:13.151 sys 0m5.847s 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.151 ************************************ 00:26:13.151 END TEST nvmf_discovery_remove_ifc 00:26:13.151 ************************************ 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.151 ************************************ 00:26:13.151 START TEST nvmf_identify_kernel_target 00:26:13.151 ************************************ 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:13.151 * Looking for test storage... 
00:26:13.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:13.151 21:19:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:13.151 21:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.151 21:19:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:13.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.151 --rc genhtml_branch_coverage=1 00:26:13.151 --rc genhtml_function_coverage=1 00:26:13.151 --rc genhtml_legend=1 00:26:13.151 --rc geninfo_all_blocks=1 00:26:13.151 --rc geninfo_unexecuted_blocks=1 00:26:13.151 00:26:13.151 ' 00:26:13.151 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:13.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.151 --rc genhtml_branch_coverage=1 00:26:13.151 --rc genhtml_function_coverage=1 00:26:13.151 --rc genhtml_legend=1 00:26:13.152 --rc geninfo_all_blocks=1 00:26:13.152 --rc geninfo_unexecuted_blocks=1 00:26:13.152 00:26:13.152 ' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:13.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.152 --rc genhtml_branch_coverage=1 00:26:13.152 --rc genhtml_function_coverage=1 00:26:13.152 --rc genhtml_legend=1 00:26:13.152 --rc geninfo_all_blocks=1 00:26:13.152 --rc geninfo_unexecuted_blocks=1 00:26:13.152 00:26:13.152 ' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:13.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.152 --rc genhtml_branch_coverage=1 00:26:13.152 --rc genhtml_function_coverage=1 00:26:13.152 --rc genhtml_legend=1 00:26:13.152 --rc geninfo_all_blocks=1 00:26:13.152 --rc geninfo_unexecuted_blocks=1 00:26:13.152 00:26:13.152 ' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.152 21:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.717 21:19:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:19.717 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.717 21:19:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:19.717 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.717 21:19:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:19.717 Found net devices under 0000:86:00.0: cvl_0_0 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.717 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:19.718 Found net devices under 0000:86:00.1: cvl_0_1 
00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:19.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:26:19.718 00:26:19.718 --- 10.0.0.2 ping statistics --- 00:26:19.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.718 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:26:19.718 00:26:19.718 --- 10.0.0.1 ping statistics --- 00:26:19.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.718 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:19.718 
21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:19.718 21:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:19.718 21:19:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:19.718 21:19:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:21.628 Waiting for block devices as requested 00:26:21.888 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:21.888 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:21.888 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.147 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.147 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.147 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:22.407 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:22.407 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.407 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:22.666 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.666 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.666 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.666 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.925 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:22.925 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:26:22.925 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:23.184 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:23.184 No valid GPT data, bailing 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:23.184 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:23.444 00:26:23.444 Discovery Log Number of Records 2, Generation counter 2 00:26:23.444 =====Discovery Log Entry 0====== 00:26:23.444 trtype: tcp 00:26:23.444 adrfam: ipv4 00:26:23.444 subtype: current discovery subsystem 
00:26:23.444 treq: not specified, sq flow control disable supported 00:26:23.444 portid: 1 00:26:23.444 trsvcid: 4420 00:26:23.444 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:23.444 traddr: 10.0.0.1 00:26:23.444 eflags: none 00:26:23.444 sectype: none 00:26:23.444 =====Discovery Log Entry 1====== 00:26:23.444 trtype: tcp 00:26:23.444 adrfam: ipv4 00:26:23.444 subtype: nvme subsystem 00:26:23.444 treq: not specified, sq flow control disable supported 00:26:23.444 portid: 1 00:26:23.444 trsvcid: 4420 00:26:23.444 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:23.444 traddr: 10.0.0.1 00:26:23.444 eflags: none 00:26:23.444 sectype: none 00:26:23.444 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:23.444 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:23.444 ===================================================== 00:26:23.444 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:23.444 ===================================================== 00:26:23.444 Controller Capabilities/Features 00:26:23.444 ================================ 00:26:23.444 Vendor ID: 0000 00:26:23.444 Subsystem Vendor ID: 0000 00:26:23.444 Serial Number: 33193df576f00e4b6f43 00:26:23.444 Model Number: Linux 00:26:23.444 Firmware Version: 6.8.9-20 00:26:23.444 Recommended Arb Burst: 0 00:26:23.444 IEEE OUI Identifier: 00 00 00 00:26:23.444 Multi-path I/O 00:26:23.444 May have multiple subsystem ports: No 00:26:23.444 May have multiple controllers: No 00:26:23.444 Associated with SR-IOV VF: No 00:26:23.444 Max Data Transfer Size: Unlimited 00:26:23.444 Max Number of Namespaces: 0 00:26:23.444 Max Number of I/O Queues: 1024 00:26:23.444 NVMe Specification Version (VS): 1.3 00:26:23.444 NVMe Specification Version (Identify): 1.3 00:26:23.444 Maximum Queue Entries: 1024 
00:26:23.444 Contiguous Queues Required: No 00:26:23.444 Arbitration Mechanisms Supported 00:26:23.444 Weighted Round Robin: Not Supported 00:26:23.444 Vendor Specific: Not Supported 00:26:23.444 Reset Timeout: 7500 ms 00:26:23.444 Doorbell Stride: 4 bytes 00:26:23.444 NVM Subsystem Reset: Not Supported 00:26:23.444 Command Sets Supported 00:26:23.444 NVM Command Set: Supported 00:26:23.444 Boot Partition: Not Supported 00:26:23.444 Memory Page Size Minimum: 4096 bytes 00:26:23.444 Memory Page Size Maximum: 4096 bytes 00:26:23.444 Persistent Memory Region: Not Supported 00:26:23.444 Optional Asynchronous Events Supported 00:26:23.444 Namespace Attribute Notices: Not Supported 00:26:23.444 Firmware Activation Notices: Not Supported 00:26:23.444 ANA Change Notices: Not Supported 00:26:23.444 PLE Aggregate Log Change Notices: Not Supported 00:26:23.444 LBA Status Info Alert Notices: Not Supported 00:26:23.444 EGE Aggregate Log Change Notices: Not Supported 00:26:23.444 Normal NVM Subsystem Shutdown event: Not Supported 00:26:23.444 Zone Descriptor Change Notices: Not Supported 00:26:23.444 Discovery Log Change Notices: Supported 00:26:23.444 Controller Attributes 00:26:23.444 128-bit Host Identifier: Not Supported 00:26:23.444 Non-Operational Permissive Mode: Not Supported 00:26:23.444 NVM Sets: Not Supported 00:26:23.444 Read Recovery Levels: Not Supported 00:26:23.444 Endurance Groups: Not Supported 00:26:23.444 Predictable Latency Mode: Not Supported 00:26:23.444 Traffic Based Keep ALive: Not Supported 00:26:23.444 Namespace Granularity: Not Supported 00:26:23.444 SQ Associations: Not Supported 00:26:23.444 UUID List: Not Supported 00:26:23.444 Multi-Domain Subsystem: Not Supported 00:26:23.444 Fixed Capacity Management: Not Supported 00:26:23.444 Variable Capacity Management: Not Supported 00:26:23.444 Delete Endurance Group: Not Supported 00:26:23.444 Delete NVM Set: Not Supported 00:26:23.444 Extended LBA Formats Supported: Not Supported 00:26:23.444 Flexible 
Data Placement Supported: Not Supported 00:26:23.444 00:26:23.444 Controller Memory Buffer Support 00:26:23.444 ================================ 00:26:23.444 Supported: No 00:26:23.444 00:26:23.444 Persistent Memory Region Support 00:26:23.444 ================================ 00:26:23.444 Supported: No 00:26:23.444 00:26:23.444 Admin Command Set Attributes 00:26:23.444 ============================ 00:26:23.444 Security Send/Receive: Not Supported 00:26:23.444 Format NVM: Not Supported 00:26:23.444 Firmware Activate/Download: Not Supported 00:26:23.444 Namespace Management: Not Supported 00:26:23.444 Device Self-Test: Not Supported 00:26:23.444 Directives: Not Supported 00:26:23.444 NVMe-MI: Not Supported 00:26:23.444 Virtualization Management: Not Supported 00:26:23.444 Doorbell Buffer Config: Not Supported 00:26:23.444 Get LBA Status Capability: Not Supported 00:26:23.444 Command & Feature Lockdown Capability: Not Supported 00:26:23.444 Abort Command Limit: 1 00:26:23.444 Async Event Request Limit: 1 00:26:23.444 Number of Firmware Slots: N/A 00:26:23.444 Firmware Slot 1 Read-Only: N/A 00:26:23.444 Firmware Activation Without Reset: N/A 00:26:23.444 Multiple Update Detection Support: N/A 00:26:23.444 Firmware Update Granularity: No Information Provided 00:26:23.444 Per-Namespace SMART Log: No 00:26:23.444 Asymmetric Namespace Access Log Page: Not Supported 00:26:23.444 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:23.444 Command Effects Log Page: Not Supported 00:26:23.444 Get Log Page Extended Data: Supported 00:26:23.444 Telemetry Log Pages: Not Supported 00:26:23.444 Persistent Event Log Pages: Not Supported 00:26:23.444 Supported Log Pages Log Page: May Support 00:26:23.444 Commands Supported & Effects Log Page: Not Supported 00:26:23.444 Feature Identifiers & Effects Log Page:May Support 00:26:23.444 NVMe-MI Commands & Effects Log Page: May Support 00:26:23.444 Data Area 4 for Telemetry Log: Not Supported 00:26:23.444 Error Log Page Entries 
Supported: 1 00:26:23.444 Keep Alive: Not Supported 00:26:23.444 00:26:23.444 NVM Command Set Attributes 00:26:23.444 ========================== 00:26:23.445 Submission Queue Entry Size 00:26:23.445 Max: 1 00:26:23.445 Min: 1 00:26:23.445 Completion Queue Entry Size 00:26:23.445 Max: 1 00:26:23.445 Min: 1 00:26:23.445 Number of Namespaces: 0 00:26:23.445 Compare Command: Not Supported 00:26:23.445 Write Uncorrectable Command: Not Supported 00:26:23.445 Dataset Management Command: Not Supported 00:26:23.445 Write Zeroes Command: Not Supported 00:26:23.445 Set Features Save Field: Not Supported 00:26:23.445 Reservations: Not Supported 00:26:23.445 Timestamp: Not Supported 00:26:23.445 Copy: Not Supported 00:26:23.445 Volatile Write Cache: Not Present 00:26:23.445 Atomic Write Unit (Normal): 1 00:26:23.445 Atomic Write Unit (PFail): 1 00:26:23.445 Atomic Compare & Write Unit: 1 00:26:23.445 Fused Compare & Write: Not Supported 00:26:23.445 Scatter-Gather List 00:26:23.445 SGL Command Set: Supported 00:26:23.445 SGL Keyed: Not Supported 00:26:23.445 SGL Bit Bucket Descriptor: Not Supported 00:26:23.445 SGL Metadata Pointer: Not Supported 00:26:23.445 Oversized SGL: Not Supported 00:26:23.445 SGL Metadata Address: Not Supported 00:26:23.445 SGL Offset: Supported 00:26:23.445 Transport SGL Data Block: Not Supported 00:26:23.445 Replay Protected Memory Block: Not Supported 00:26:23.445 00:26:23.445 Firmware Slot Information 00:26:23.445 ========================= 00:26:23.445 Active slot: 0 00:26:23.445 00:26:23.445 00:26:23.445 Error Log 00:26:23.445 ========= 00:26:23.445 00:26:23.445 Active Namespaces 00:26:23.445 ================= 00:26:23.445 Discovery Log Page 00:26:23.445 ================== 00:26:23.445 Generation Counter: 2 00:26:23.445 Number of Records: 2 00:26:23.445 Record Format: 0 00:26:23.445 00:26:23.445 Discovery Log Entry 0 00:26:23.445 ---------------------- 00:26:23.445 Transport Type: 3 (TCP) 00:26:23.445 Address Family: 1 (IPv4) 00:26:23.445 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:23.445 Entry Flags: 00:26:23.445 Duplicate Returned Information: 0 00:26:23.445 Explicit Persistent Connection Support for Discovery: 0 00:26:23.445 Transport Requirements: 00:26:23.445 Secure Channel: Not Specified 00:26:23.445 Port ID: 1 (0x0001) 00:26:23.445 Controller ID: 65535 (0xffff) 00:26:23.445 Admin Max SQ Size: 32 00:26:23.445 Transport Service Identifier: 4420 00:26:23.445 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:23.445 Transport Address: 10.0.0.1 00:26:23.445 Discovery Log Entry 1 00:26:23.445 ---------------------- 00:26:23.445 Transport Type: 3 (TCP) 00:26:23.445 Address Family: 1 (IPv4) 00:26:23.445 Subsystem Type: 2 (NVM Subsystem) 00:26:23.445 Entry Flags: 00:26:23.445 Duplicate Returned Information: 0 00:26:23.445 Explicit Persistent Connection Support for Discovery: 0 00:26:23.445 Transport Requirements: 00:26:23.445 Secure Channel: Not Specified 00:26:23.445 Port ID: 1 (0x0001) 00:26:23.445 Controller ID: 65535 (0xffff) 00:26:23.445 Admin Max SQ Size: 32 00:26:23.445 Transport Service Identifier: 4420 00:26:23.445 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:23.445 Transport Address: 10.0.0.1 00:26:23.445 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:23.445 get_feature(0x01) failed 00:26:23.445 get_feature(0x02) failed 00:26:23.445 get_feature(0x04) failed 00:26:23.445 ===================================================== 00:26:23.445 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:23.445 ===================================================== 00:26:23.445 Controller Capabilities/Features 00:26:23.445 ================================ 00:26:23.445 Vendor ID: 0000 00:26:23.445 Subsystem Vendor ID: 
0000 00:26:23.445 Serial Number: b4b1be350ae218a94122 00:26:23.445 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:23.445 Firmware Version: 6.8.9-20 00:26:23.445 Recommended Arb Burst: 6 00:26:23.445 IEEE OUI Identifier: 00 00 00 00:26:23.445 Multi-path I/O 00:26:23.445 May have multiple subsystem ports: Yes 00:26:23.445 May have multiple controllers: Yes 00:26:23.445 Associated with SR-IOV VF: No 00:26:23.445 Max Data Transfer Size: Unlimited 00:26:23.445 Max Number of Namespaces: 1024 00:26:23.445 Max Number of I/O Queues: 128 00:26:23.445 NVMe Specification Version (VS): 1.3 00:26:23.445 NVMe Specification Version (Identify): 1.3 00:26:23.445 Maximum Queue Entries: 1024 00:26:23.445 Contiguous Queues Required: No 00:26:23.445 Arbitration Mechanisms Supported 00:26:23.445 Weighted Round Robin: Not Supported 00:26:23.445 Vendor Specific: Not Supported 00:26:23.445 Reset Timeout: 7500 ms 00:26:23.445 Doorbell Stride: 4 bytes 00:26:23.445 NVM Subsystem Reset: Not Supported 00:26:23.445 Command Sets Supported 00:26:23.445 NVM Command Set: Supported 00:26:23.445 Boot Partition: Not Supported 00:26:23.445 Memory Page Size Minimum: 4096 bytes 00:26:23.445 Memory Page Size Maximum: 4096 bytes 00:26:23.445 Persistent Memory Region: Not Supported 00:26:23.445 Optional Asynchronous Events Supported 00:26:23.445 Namespace Attribute Notices: Supported 00:26:23.445 Firmware Activation Notices: Not Supported 00:26:23.445 ANA Change Notices: Supported 00:26:23.445 PLE Aggregate Log Change Notices: Not Supported 00:26:23.445 LBA Status Info Alert Notices: Not Supported 00:26:23.445 EGE Aggregate Log Change Notices: Not Supported 00:26:23.445 Normal NVM Subsystem Shutdown event: Not Supported 00:26:23.445 Zone Descriptor Change Notices: Not Supported 00:26:23.445 Discovery Log Change Notices: Not Supported 00:26:23.445 Controller Attributes 00:26:23.445 128-bit Host Identifier: Supported 00:26:23.445 Non-Operational Permissive Mode: Not Supported 00:26:23.445 NVM Sets: Not 
Supported 00:26:23.445 Read Recovery Levels: Not Supported 00:26:23.445 Endurance Groups: Not Supported 00:26:23.445 Predictable Latency Mode: Not Supported 00:26:23.445 Traffic Based Keep ALive: Supported 00:26:23.445 Namespace Granularity: Not Supported 00:26:23.445 SQ Associations: Not Supported 00:26:23.445 UUID List: Not Supported 00:26:23.445 Multi-Domain Subsystem: Not Supported 00:26:23.445 Fixed Capacity Management: Not Supported 00:26:23.445 Variable Capacity Management: Not Supported 00:26:23.445 Delete Endurance Group: Not Supported 00:26:23.445 Delete NVM Set: Not Supported 00:26:23.445 Extended LBA Formats Supported: Not Supported 00:26:23.445 Flexible Data Placement Supported: Not Supported 00:26:23.445 00:26:23.445 Controller Memory Buffer Support 00:26:23.445 ================================ 00:26:23.445 Supported: No 00:26:23.445 00:26:23.445 Persistent Memory Region Support 00:26:23.445 ================================ 00:26:23.445 Supported: No 00:26:23.445 00:26:23.445 Admin Command Set Attributes 00:26:23.445 ============================ 00:26:23.445 Security Send/Receive: Not Supported 00:26:23.445 Format NVM: Not Supported 00:26:23.445 Firmware Activate/Download: Not Supported 00:26:23.445 Namespace Management: Not Supported 00:26:23.445 Device Self-Test: Not Supported 00:26:23.445 Directives: Not Supported 00:26:23.445 NVMe-MI: Not Supported 00:26:23.445 Virtualization Management: Not Supported 00:26:23.445 Doorbell Buffer Config: Not Supported 00:26:23.445 Get LBA Status Capability: Not Supported 00:26:23.445 Command & Feature Lockdown Capability: Not Supported 00:26:23.445 Abort Command Limit: 4 00:26:23.445 Async Event Request Limit: 4 00:26:23.445 Number of Firmware Slots: N/A 00:26:23.445 Firmware Slot 1 Read-Only: N/A 00:26:23.445 Firmware Activation Without Reset: N/A 00:26:23.445 Multiple Update Detection Support: N/A 00:26:23.445 Firmware Update Granularity: No Information Provided 00:26:23.445 Per-Namespace SMART Log: Yes 
00:26:23.445 Asymmetric Namespace Access Log Page: Supported 00:26:23.445 ANA Transition Time : 10 sec 00:26:23.445 00:26:23.445 Asymmetric Namespace Access Capabilities 00:26:23.445 ANA Optimized State : Supported 00:26:23.445 ANA Non-Optimized State : Supported 00:26:23.445 ANA Inaccessible State : Supported 00:26:23.445 ANA Persistent Loss State : Supported 00:26:23.445 ANA Change State : Supported 00:26:23.445 ANAGRPID is not changed : No 00:26:23.445 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:23.445 00:26:23.445 ANA Group Identifier Maximum : 128 00:26:23.445 Number of ANA Group Identifiers : 128 00:26:23.445 Max Number of Allowed Namespaces : 1024 00:26:23.445 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:23.445 Command Effects Log Page: Supported 00:26:23.445 Get Log Page Extended Data: Supported 00:26:23.445 Telemetry Log Pages: Not Supported 00:26:23.445 Persistent Event Log Pages: Not Supported 00:26:23.445 Supported Log Pages Log Page: May Support 00:26:23.445 Commands Supported & Effects Log Page: Not Supported 00:26:23.445 Feature Identifiers & Effects Log Page:May Support 00:26:23.446 NVMe-MI Commands & Effects Log Page: May Support 00:26:23.446 Data Area 4 for Telemetry Log: Not Supported 00:26:23.446 Error Log Page Entries Supported: 128 00:26:23.446 Keep Alive: Supported 00:26:23.446 Keep Alive Granularity: 1000 ms 00:26:23.446 00:26:23.446 NVM Command Set Attributes 00:26:23.446 ========================== 00:26:23.446 Submission Queue Entry Size 00:26:23.446 Max: 64 00:26:23.446 Min: 64 00:26:23.446 Completion Queue Entry Size 00:26:23.446 Max: 16 00:26:23.446 Min: 16 00:26:23.446 Number of Namespaces: 1024 00:26:23.446 Compare Command: Not Supported 00:26:23.446 Write Uncorrectable Command: Not Supported 00:26:23.446 Dataset Management Command: Supported 00:26:23.446 Write Zeroes Command: Supported 00:26:23.446 Set Features Save Field: Not Supported 00:26:23.446 Reservations: Not Supported 00:26:23.446 Timestamp: Not Supported 
00:26:23.446 Copy: Not Supported 00:26:23.446 Volatile Write Cache: Present 00:26:23.446 Atomic Write Unit (Normal): 1 00:26:23.446 Atomic Write Unit (PFail): 1 00:26:23.446 Atomic Compare & Write Unit: 1 00:26:23.446 Fused Compare & Write: Not Supported 00:26:23.446 Scatter-Gather List 00:26:23.446 SGL Command Set: Supported 00:26:23.446 SGL Keyed: Not Supported 00:26:23.446 SGL Bit Bucket Descriptor: Not Supported 00:26:23.446 SGL Metadata Pointer: Not Supported 00:26:23.446 Oversized SGL: Not Supported 00:26:23.446 SGL Metadata Address: Not Supported 00:26:23.446 SGL Offset: Supported 00:26:23.446 Transport SGL Data Block: Not Supported 00:26:23.446 Replay Protected Memory Block: Not Supported 00:26:23.446 00:26:23.446 Firmware Slot Information 00:26:23.446 ========================= 00:26:23.446 Active slot: 0 00:26:23.446 00:26:23.446 Asymmetric Namespace Access 00:26:23.446 =========================== 00:26:23.446 Change Count : 0 00:26:23.446 Number of ANA Group Descriptors : 1 00:26:23.446 ANA Group Descriptor : 0 00:26:23.446 ANA Group ID : 1 00:26:23.446 Number of NSID Values : 1 00:26:23.446 Change Count : 0 00:26:23.446 ANA State : 1 00:26:23.446 Namespace Identifier : 1 00:26:23.446 00:26:23.446 Commands Supported and Effects 00:26:23.446 ============================== 00:26:23.446 Admin Commands 00:26:23.446 -------------- 00:26:23.446 Get Log Page (02h): Supported 00:26:23.446 Identify (06h): Supported 00:26:23.446 Abort (08h): Supported 00:26:23.446 Set Features (09h): Supported 00:26:23.446 Get Features (0Ah): Supported 00:26:23.446 Asynchronous Event Request (0Ch): Supported 00:26:23.446 Keep Alive (18h): Supported 00:26:23.446 I/O Commands 00:26:23.446 ------------ 00:26:23.446 Flush (00h): Supported 00:26:23.446 Write (01h): Supported LBA-Change 00:26:23.446 Read (02h): Supported 00:26:23.446 Write Zeroes (08h): Supported LBA-Change 00:26:23.446 Dataset Management (09h): Supported 00:26:23.446 00:26:23.446 Error Log 00:26:23.446 ========= 
00:26:23.446 Entry: 0 00:26:23.446 Error Count: 0x3 00:26:23.446 Submission Queue Id: 0x0 00:26:23.446 Command Id: 0x5 00:26:23.446 Phase Bit: 0 00:26:23.446 Status Code: 0x2 00:26:23.446 Status Code Type: 0x0 00:26:23.446 Do Not Retry: 1 00:26:23.446 Error Location: 0x28 00:26:23.446 LBA: 0x0 00:26:23.446 Namespace: 0x0 00:26:23.446 Vendor Log Page: 0x0 00:26:23.446 ----------- 00:26:23.446 Entry: 1 00:26:23.446 Error Count: 0x2 00:26:23.446 Submission Queue Id: 0x0 00:26:23.446 Command Id: 0x5 00:26:23.446 Phase Bit: 0 00:26:23.446 Status Code: 0x2 00:26:23.446 Status Code Type: 0x0 00:26:23.446 Do Not Retry: 1 00:26:23.446 Error Location: 0x28 00:26:23.446 LBA: 0x0 00:26:23.446 Namespace: 0x0 00:26:23.446 Vendor Log Page: 0x0 00:26:23.446 ----------- 00:26:23.446 Entry: 2 00:26:23.446 Error Count: 0x1 00:26:23.446 Submission Queue Id: 0x0 00:26:23.446 Command Id: 0x4 00:26:23.446 Phase Bit: 0 00:26:23.446 Status Code: 0x2 00:26:23.446 Status Code Type: 0x0 00:26:23.446 Do Not Retry: 1 00:26:23.446 Error Location: 0x28 00:26:23.446 LBA: 0x0 00:26:23.446 Namespace: 0x0 00:26:23.446 Vendor Log Page: 0x0 00:26:23.446 00:26:23.446 Number of Queues 00:26:23.446 ================ 00:26:23.446 Number of I/O Submission Queues: 128 00:26:23.446 Number of I/O Completion Queues: 128 00:26:23.446 00:26:23.446 ZNS Specific Controller Data 00:26:23.446 ============================ 00:26:23.446 Zone Append Size Limit: 0 00:26:23.446 00:26:23.446 00:26:23.446 Active Namespaces 00:26:23.446 ================= 00:26:23.446 get_feature(0x05) failed 00:26:23.446 Namespace ID:1 00:26:23.446 Command Set Identifier: NVM (00h) 00:26:23.446 Deallocate: Supported 00:26:23.446 Deallocated/Unwritten Error: Not Supported 00:26:23.446 Deallocated Read Value: Unknown 00:26:23.446 Deallocate in Write Zeroes: Not Supported 00:26:23.446 Deallocated Guard Field: 0xFFFF 00:26:23.446 Flush: Supported 00:26:23.446 Reservation: Not Supported 00:26:23.446 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:23.446 Size (in LBAs): 3125627568 (1490GiB) 00:26:23.446 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:23.446 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:23.446 UUID: e6f84e43-62bc-4556-b5b7-e7ef2ce156c7 00:26:23.446 Thin Provisioning: Not Supported 00:26:23.446 Per-NS Atomic Units: Yes 00:26:23.446 Atomic Boundary Size (Normal): 0 00:26:23.446 Atomic Boundary Size (PFail): 0 00:26:23.446 Atomic Boundary Offset: 0 00:26:23.446 NGUID/EUI64 Never Reused: No 00:26:23.446 ANA group ID: 1 00:26:23.446 Namespace Write Protected: No 00:26:23.446 Number of LBA Formats: 1 00:26:23.446 Current LBA Format: LBA Format #00 00:26:23.446 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:23.446 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.446 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.446 rmmod nvme_tcp 00:26:23.706 rmmod nvme_fabrics 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.706 21:19:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:25.613 21:19:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:25.613 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:25.873 21:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:28.405 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:28.665 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:26:30.045 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:30.045 00:26:30.045 real 0m17.284s 00:26:30.045 user 0m4.431s 00:26:30.045 sys 0m8.705s 00:26:30.045 21:19:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.045 21:19:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.045 ************************************ 00:26:30.045 END TEST nvmf_identify_kernel_target 00:26:30.045 ************************************ 00:26:30.045 21:19:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:30.045 21:19:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.046 21:19:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.046 21:19:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.306 ************************************ 00:26:30.306 START TEST nvmf_auth_host 00:26:30.306 ************************************ 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:30.306 * Looking for test storage... 
00:26:30.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:30.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.306 --rc genhtml_branch_coverage=1 00:26:30.306 --rc genhtml_function_coverage=1 00:26:30.306 --rc genhtml_legend=1 00:26:30.306 --rc geninfo_all_blocks=1 00:26:30.306 --rc geninfo_unexecuted_blocks=1 00:26:30.306 00:26:30.306 ' 00:26:30.306 21:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:30.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.306 --rc genhtml_branch_coverage=1 00:26:30.306 --rc genhtml_function_coverage=1 00:26:30.306 --rc genhtml_legend=1 00:26:30.306 --rc geninfo_all_blocks=1 00:26:30.306 --rc geninfo_unexecuted_blocks=1 00:26:30.306 00:26:30.306 ' 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:30.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.306 --rc genhtml_branch_coverage=1 00:26:30.306 --rc genhtml_function_coverage=1 00:26:30.306 --rc genhtml_legend=1 00:26:30.306 --rc geninfo_all_blocks=1 00:26:30.306 --rc geninfo_unexecuted_blocks=1 00:26:30.306 00:26:30.306 ' 00:26:30.306 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:30.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.307 --rc genhtml_branch_coverage=1 00:26:30.307 --rc genhtml_function_coverage=1 00:26:30.307 --rc genhtml_legend=1 00:26:30.307 --rc geninfo_all_blocks=1 00:26:30.307 --rc geninfo_unexecuted_blocks=1 00:26:30.307 00:26:30.307 ' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.307 21:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.307 21:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.307 21:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:36.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:36.880 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:36.880 Found net devices under 0000:86:00.0: cvl_0_0 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:36.880 Found net devices under 0000:86:00.1: cvl_0_1 00:26:36.880 21:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.880 21:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.880 21:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.880 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.880 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.880 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:26:36.881 00:26:36.881 --- 10.0.0.2 ping statistics --- 00:26:36.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.881 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:26:36.881 00:26:36.881 --- 10.0.0.1 ping statistics --- 00:26:36.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.881 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1441221 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1441221 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1441221 ']' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.881 21:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a4a7b8aa4f6b401d88318629abedbc95 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ps5 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a4a7b8aa4f6b401d88318629abedbc95 0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a4a7b8aa4f6b401d88318629abedbc95 0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a4a7b8aa4f6b401d88318629abedbc95 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ps5 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ps5 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ps5 
00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=de9b5d8b7ab180e5692626e2daf1f2ac4557c3689becca7810e553328417c2a1 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RJy 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key de9b5d8b7ab180e5692626e2daf1f2ac4557c3689becca7810e553328417c2a1 3 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 de9b5d8b7ab180e5692626e2daf1f2ac4557c3689becca7810e553328417c2a1 3 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=de9b5d8b7ab180e5692626e2daf1f2ac4557c3689becca7810e553328417c2a1 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RJy 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RJy 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RJy 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=11920b0bc83acefc6054a183e05c50e1ca75051e63154043 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wPE 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 11920b0bc83acefc6054a183e05c50e1ca75051e63154043 0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 11920b0bc83acefc6054a183e05c50e1ca75051e63154043 0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=11920b0bc83acefc6054a183e05c50e1ca75051e63154043 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wPE 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wPE 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wPE 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:36.881 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c6bec47c12ae7ad03c567016119410d33f4d0af4015a9cea 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ufz 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c6bec47c12ae7ad03c567016119410d33f4d0af4015a9cea 2 00:26:36.882 21:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c6bec47c12ae7ad03c567016119410d33f4d0af4015a9cea 2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c6bec47c12ae7ad03c567016119410d33f4d0af4015a9cea 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ufz 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ufz 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Ufz 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b07c923d05910325cc8777a65ea5bcc8 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mVq 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b07c923d05910325cc8777a65ea5bcc8 1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b07c923d05910325cc8777a65ea5bcc8 1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b07c923d05910325cc8777a65ea5bcc8 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mVq 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mVq 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.mVq 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba9b399c361e8b86d24081e935f3bcd2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qtI 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba9b399c361e8b86d24081e935f3bcd2 1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba9b399c361e8b86d24081e935f3bcd2 1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba9b399c361e8b86d24081e935f3bcd2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qtI 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qtI 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qtI 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.882 21:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bfe5bbd98c47456f4d5d98777a65c74039ba8f99ba66e195 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ts7 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bfe5bbd98c47456f4d5d98777a65c74039ba8f99ba66e195 2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bfe5bbd98c47456f4d5d98777a65c74039ba8f99ba66e195 2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bfe5bbd98c47456f4d5d98777a65c74039ba8f99ba66e195 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ts7 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ts7 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ts7 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.882 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0a43f4a625f8451647d6230ce8e1b62 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CFT 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0a43f4a625f8451647d6230ce8e1b62 0 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0a43f4a625f8451647d6230ce8e1b62 0 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0a43f4a625f8451647d6230ce8e1b62 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:37.141 21:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CFT 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CFT 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.CFT 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4873365533c5d27f5132346b60ec2cd33d8e8b274d21a306323aa28be8113ba3 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.q89 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4873365533c5d27f5132346b60ec2cd33d8e8b274d21a306323aa28be8113ba3 3 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4873365533c5d27f5132346b60ec2cd33d8e8b274d21a306323aa28be8113ba3 3 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4873365533c5d27f5132346b60ec2cd33d8e8b274d21a306323aa28be8113ba3 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:37.141 21:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.q89 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.q89 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.q89 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1441221 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1441221 ']' 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
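The gen_dhchap_key traces above repeat one pattern per key: `xxd -p -c0 -l N /dev/urandom` yields a hex string of twice that length, `mktemp` picks a `/tmp/spdk.key-<digest>.XXX` file, and an inline `python -` step wraps the hex string in a DHHC-1 secret before `chmod 0600`. Decoding the secrets that appear later in this log (e.g. the base64 `MTE5MjBi…` is the ASCII hex string `11920b…`) shows the payload is the hex string itself with a CRC-32 appended. A standalone sketch of that formatting step (the function name and argument layout are mine, and the little-endian CRC byte order is an assumption based on the common DH-HMAC-CHAP secret convention, not lifted from nvmf/common.sh):

```python
import base64
import binascii

def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Sketch of the DHHC-1 wrapping seen in the gen_dhchap_key traces.

    digest_id mirrors the log's digest codes: 0=null, 1=sha256, 2=sha384, 3=sha512.
    The base64 payload is the ASCII hex string plus its CRC-32 (assumed little-endian).
    """
    raw = hex_key.encode("ascii")
    crc = binascii.crc32(raw) & 0xFFFFFFFF
    blob = raw + crc.to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02x}:{base64.b64encode(blob).decode('ascii')}:"

# keys[1] from the log: gen_dhchap_key null 48 -> digest code 0
print(format_dhchap_key("11920b0bc83acefc6054a183e05c50e1ca75051e63154043", 0))
```

Since 48 key bytes encode into exactly 64 base64 characters before the CRC tail begins, the start of this output lines up with the `DHHC-1:00:MTE5MjBi…` secret that nvmet_auth_set_key echoes later in the log.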
00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.141 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ps5 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RJy ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RJy 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wPE 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Ufz ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ufz 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.mVq 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qtI ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qtI 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Ts7 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.CFT ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.CFT 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.q89 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:37.400 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.401 21:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:37.401 21:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:39.932 Waiting for block devices as requested 00:26:40.192 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:40.192 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:40.193 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:40.451 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:40.451 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:40.452 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:40.452 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:40.710 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:40.710 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:40.710 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:40.710 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:40.969 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:40.969 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:40.969 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:41.227 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:41.227 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:41.227 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:41.796 No valid GPT data, bailing 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:41.796 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:42.055 00:26:42.055 Discovery Log Number of Records 2, Generation counter 2 00:26:42.055 =====Discovery Log Entry 0====== 00:26:42.055 trtype: tcp 00:26:42.055 adrfam: ipv4 00:26:42.055 subtype: current discovery subsystem 00:26:42.055 treq: not specified, sq flow control disable supported 00:26:42.055 portid: 1 00:26:42.055 trsvcid: 4420 00:26:42.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:42.055 traddr: 10.0.0.1 00:26:42.055 eflags: none 00:26:42.055 sectype: none 00:26:42.055 =====Discovery Log Entry 1====== 00:26:42.055 trtype: tcp 00:26:42.055 adrfam: ipv4 00:26:42.055 subtype: nvme subsystem 00:26:42.055 treq: not specified, sq flow control disable supported 00:26:42.055 portid: 1 00:26:42.055 trsvcid: 4420 00:26:42.055 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:42.055 traddr: 10.0.0.1 00:26:42.055 eflags: none 00:26:42.055 sectype: none 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:42.055 21:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.055 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.315 nvme0n1 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.315 nvme0n1 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.315 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.575 21:19:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.575 
21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.575 nvme0n1 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.575 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:42.835 nvme0n1 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:42.835 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.836 21:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.095 nvme0n1 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.095 21:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.095 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.096 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.355 nvme0n1 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.355 
21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:43.355 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:43.356 
21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.356 21:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.356 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.615 nvme0n1 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.615 21:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.615 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.616 21:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.616 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.875 nvme0n1 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.875 21:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.875 21:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.134 nvme0n1 00:26:44.134 21:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.134 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:44.135 21:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.135 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.394 nvme0n1 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:44.394 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.395 21:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.395 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.654 nvme0n1 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.654 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.913 nvme0n1 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.913 
21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.913 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.914 21:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.172 nvme0n1 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.172 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.173 21:19:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.173 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.431 nvme0n1 00:26:45.431 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.431 21:19:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.431 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.431 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.431 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.431 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:45.690 
21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.690 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.691 21:19:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.691 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.949 nvme0n1 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.949 21:19:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.949 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.950 
21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.950 21:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.208 nvme0n1 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.208 21:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.208 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.773 nvme0n1 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:46.773 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.774 21:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.774 21:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.031 nvme0n1 00:26:47.031 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.031 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.032 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.288 21:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.288 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.288 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.288 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.288 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.289 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.546 nvme0n1 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.546 21:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.546 21:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.546 21:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.546 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.112 nvme0n1 00:26:48.112 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.112 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.112 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.112 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.112 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.112 21:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.112 21:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.112 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.113 21:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.113 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.370 nvme0n1 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.370 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.627 21:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.627 21:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.282 nvme0n1 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.282 21:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.282 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.283 21:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.283 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.283 21:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 nvme0n1 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.852 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.853 21:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.853 21:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.421 nvme0n1 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.421 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.422 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.990 nvme0n1 00:26:50.990 21:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.990 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.991 
21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.991 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.559 nvme0n1 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.559 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.818 nvme0n1 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.818 
21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:51.818 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.819 21:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.078 nvme0n1 
00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:52.078 21:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.078 
21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.078 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.338 nvme0n1 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.338 21:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.338 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.597 nvme0n1 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.597 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.598 21:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.598 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.857 nvme0n1 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:52.857 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.858 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.116 nvme0n1 00:26:53.116 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.116 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.116 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.116 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.116 21:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.116 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.116 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.116 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.117 
21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.117 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.375 nvme0n1 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 
00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:53.375 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.376 21:20:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.376 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.634 nvme0n1 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.634 21:20:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.634 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.634 nvme0n1 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 nvme0n1 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.893 21:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.152 21:20:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.152 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.153 21:20:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.153 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.153 21:20:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.412 nvme0n1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.412 
21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.412 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.671 nvme0n1 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.671 21:20:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.671 21:20:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.671 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.672 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.931 nvme0n1 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.931 21:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.931 21:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.931 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.190 nvme0n1 00:26:55.190 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.190 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.190 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.190 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.190 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.190 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.449 21:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.449 21:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.449 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.450 
21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.450 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.709 nvme0n1 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.709 21:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.709 21:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.968 nvme0n1 
00:26:55.968 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.968 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.968 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.968 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.968 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.968 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:56.228 21:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.228 
21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.228 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.488 nvme0n1 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.488 21:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.488 21:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.488 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.058 nvme0n1 00:26:57.058 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.058 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.058 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.058 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.058 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.058 21:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:57.058 21:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.058 21:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.058 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.316 nvme0n1 00:26:57.316 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.316 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.316 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.316 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.316 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.574 21:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.574 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:57.575 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.833 nvme0n1 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.833 21:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.833 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.834 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.092 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.092 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.092 21:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.660 nvme0n1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.660 21:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.229 nvme0n1 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.229 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.798 nvme0n1 00:26:59.798 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.798 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.799 21:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.368 nvme0n1 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.368 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.628 21:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.197 nvme0n1 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.197 21:20:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.197 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.198 nvme0n1 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.198 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:01.457 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.458 nvme0n1 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.458 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.717 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.717 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.717 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.718 nvme0n1 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP
00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.718 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.978 nvme0n1
00:27:01.978 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.978 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:01.978 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:01.978 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.978 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.978 21:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=:
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=:
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.978 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.238 nvme0n1
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk:
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=:
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk:
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=:
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.238 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.497 nvme0n1
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==:
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==:
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==:
00:27:02.497 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]]
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==:
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.498 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.757 nvme0n1
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9:
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX:
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9:
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX:
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:02.757 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:02.758 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.758 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.017 nvme0n1
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:03.017 21:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==:
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU:
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==:
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]]
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU:
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.017 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.276 nvme0n1
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=:
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=:
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.276 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.535 nvme0n1
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk:
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=:
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk:
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]]
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=:
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:03.535 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.536 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.795 nvme0n1
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==:
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==:
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.795 21:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.054 nvme0n1 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.054 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.313 nvme0n1 00:27:04.313 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.313 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.313 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.313 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.313 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.313 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:27:04.573 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.574 21:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.574 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.833 nvme0n1 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.833 21:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.833 21:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.093 nvme0n1 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.093 
21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.093 21:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.093 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.662 nvme0n1 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.662 21:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.662 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.663 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.922 nvme0n1 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:05.922 
21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.922 21:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.922 21:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.922 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.490 nvme0n1 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.490 21:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.490 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.750 nvme0n1 00:27:06.750 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.750 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.750 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.750 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.750 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.750 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.010 21:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.010 21:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.268 nvme0n1 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhN2I4YWE0ZjZiNDAxZDg4MzE4NjI5YWJlZGJjOTV9qmuk: 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU5YjVkOGI3YWIxODBlNTY5MjYyNmUyZGFmMWYyYWM0NTU3YzM2ODliZWNjYTc4MTBlNTUzMzI4NDE3YzJhMQZQr1k=: 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.268 21:20:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.268 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.833 nvme0n1 00:27:07.833 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.833 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.833 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.833 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.833 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 21:20:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:08.091 21:20:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.091 21:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.091 21:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.091 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.659 nvme0n1 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.659 21:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:08.659 21:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.659 21:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.227 nvme0n1 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.227 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZlNWJiZDk4YzQ3NDU2ZjRkNWQ5ODc3N2E2NWM3NDAzOWJhOGY5OWJhNjZlMTk1uOxEEw==: 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: ]] 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBhNDNmNGE2MjVmODQ1MTY0N2Q2MjMwY2U4ZTFiNjLxUWMU: 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.228 21:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.228 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.796 nvme0n1 00:27:09.796 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.796 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.796 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.796 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.796 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDg3MzM2NTUzM2M1ZDI3ZjUxMzIzNDZiNjBlYzJjZDMzZDhlOGIyNzRkMjFhMzA2MzIzYWEyOGJlODExM2JhM1BE9A0=: 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.055 
21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.055 21:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.623 nvme0n1 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:10.623 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.624 request: 00:27:10.624 { 00:27:10.624 "name": "nvme0", 00:27:10.624 "trtype": "tcp", 00:27:10.624 "traddr": "10.0.0.1", 00:27:10.624 "adrfam": "ipv4", 00:27:10.624 "trsvcid": "4420", 00:27:10.624 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:10.624 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:10.624 "prchk_reftag": false, 00:27:10.624 "prchk_guard": false, 00:27:10.624 "hdgst": false, 00:27:10.624 "ddgst": false, 00:27:10.624 "allow_unrecognized_csi": false, 00:27:10.624 "method": "bdev_nvme_attach_controller", 00:27:10.624 "req_id": 1 00:27:10.624 } 00:27:10.624 Got JSON-RPC error 
response 00:27:10.624 response: 00:27:10.624 { 00:27:10.624 "code": -5, 00:27:10.624 "message": "Input/output error" 00:27:10.624 } 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.624 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.883 request: 
00:27:10.883 { 00:27:10.883 "name": "nvme0", 00:27:10.883 "trtype": "tcp", 00:27:10.883 "traddr": "10.0.0.1", 00:27:10.883 "adrfam": "ipv4", 00:27:10.883 "trsvcid": "4420", 00:27:10.883 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:10.883 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:10.883 "prchk_reftag": false, 00:27:10.883 "prchk_guard": false, 00:27:10.883 "hdgst": false, 00:27:10.883 "ddgst": false, 00:27:10.883 "dhchap_key": "key2", 00:27:10.883 "allow_unrecognized_csi": false, 00:27:10.883 "method": "bdev_nvme_attach_controller", 00:27:10.883 "req_id": 1 00:27:10.883 } 00:27:10.883 Got JSON-RPC error response 00:27:10.883 response: 00:27:10.883 { 00:27:10.883 "code": -5, 00:27:10.883 "message": "Input/output error" 00:27:10.883 } 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.883 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.884 21:20:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.884 request: 00:27:10.884 { 00:27:10.884 "name": "nvme0", 00:27:10.884 "trtype": "tcp", 00:27:10.884 "traddr": "10.0.0.1", 00:27:10.884 "adrfam": "ipv4", 00:27:10.884 "trsvcid": "4420", 00:27:10.884 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:10.884 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:10.884 "prchk_reftag": false, 00:27:10.884 "prchk_guard": false, 00:27:10.884 "hdgst": false, 00:27:10.884 "ddgst": false, 00:27:10.884 "dhchap_key": "key1", 00:27:10.884 "dhchap_ctrlr_key": "ckey2", 00:27:10.884 "allow_unrecognized_csi": false, 00:27:10.884 "method": "bdev_nvme_attach_controller", 00:27:10.884 "req_id": 1 00:27:10.884 } 00:27:10.884 Got JSON-RPC error response 00:27:10.884 response: 00:27:10.884 { 00:27:10.884 "code": -5, 00:27:10.884 "message": "Input/output error" 00:27:10.884 } 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.884 21:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.143 nvme0n1 00:27:11.143 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.143 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:11.143 21:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.143 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.143 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.143 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.143 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:11.144 
21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.144 request: 00:27:11.144 { 00:27:11.144 "name": "nvme0", 00:27:11.144 "dhchap_key": "key1", 00:27:11.144 "dhchap_ctrlr_key": "ckey2", 00:27:11.144 "method": "bdev_nvme_set_keys", 00:27:11.144 "req_id": 1 00:27:11.144 } 00:27:11.144 Got JSON-RPC error response 00:27:11.144 response: 
00:27:11.144 { 00:27:11.144 "code": -13, 00:27:11.144 "message": "Permission denied" 00:27:11.144 } 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:11.144 21:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:12.518 21:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.518 21:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:12.518 21:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.518 21:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.518 21:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.518 21:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:12.518 21:20:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE5MjBiMGJjODNhY2VmYzYwNTRhMTgzZTA1YzUwZTFjYTc1MDUxZTYzMTU0MDQz+EwWCQ==: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZiZWM0N2MxMmFlN2FkMDNjNTY3MDE2MTE5NDEwZDMzZjRkMGFmNDAxNWE5Y2VhPXCNsQ==: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.453 nvme0n1 00:27:13.453 21:20:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3YzkyM2QwNTkxMDMyNWNjODc3N2E2NWVhNWJjYzgiR4Q9: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: ]] 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5YjM5OWMzNjFlOGI4NmQyNDA4MWU5MzVmM2JjZDIWJvqX: 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:13.453 21:20:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.453 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.711 request: 00:27:13.711 { 00:27:13.711 "name": "nvme0", 00:27:13.711 "dhchap_key": "key2", 00:27:13.711 "dhchap_ctrlr_key": "ckey1", 00:27:13.711 "method": "bdev_nvme_set_keys", 00:27:13.711 "req_id": 1 00:27:13.711 } 00:27:13.711 Got JSON-RPC error response 00:27:13.711 response: 00:27:13.711 { 00:27:13.711 "code": -13, 00:27:13.711 "message": "Permission denied" 00:27:13.711 } 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:13.711 21:20:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:13.711 21:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.645 rmmod nvme_tcp 
00:27:14.645 rmmod nvme_fabrics 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1441221 ']' 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1441221 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1441221 ']' 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1441221 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.645 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1441221 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1441221' 00:27:14.903 killing process with pid 1441221 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1441221 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1441221 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.903 21:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:17.433 21:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.433 21:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:17.433 21:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:17.433 21:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.433 21:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:17.433 21:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:17.433 21:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:19.964 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:19.964 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:21.342 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:21.602 21:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ps5 /tmp/spdk.key-null.wPE /tmp/spdk.key-sha256.mVq /tmp/spdk.key-sha384.Ts7 
/tmp/spdk.key-sha512.q89 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:21.602 21:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:24.136 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:24.136 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:24.136 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:24.395 00:27:24.395 real 0m54.202s 00:27:24.395 user 0m48.158s 00:27:24.395 sys 0m12.650s 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.395 ************************************ 00:27:24.395 END TEST nvmf_auth_host 00:27:24.395 ************************************ 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.395 21:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.395 ************************************ 00:27:24.395 START TEST nvmf_digest 00:27:24.395 ************************************ 00:27:24.396 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:24.656 * Looking for test storage... 00:27:24.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.656 21:20:32 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.656 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:24.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.656 --rc genhtml_branch_coverage=1 00:27:24.656 --rc genhtml_function_coverage=1 00:27:24.657 --rc genhtml_legend=1 00:27:24.657 --rc geninfo_all_blocks=1 00:27:24.657 --rc geninfo_unexecuted_blocks=1 00:27:24.657 00:27:24.657 ' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:24.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.657 --rc genhtml_branch_coverage=1 00:27:24.657 --rc genhtml_function_coverage=1 00:27:24.657 --rc genhtml_legend=1 00:27:24.657 --rc geninfo_all_blocks=1 00:27:24.657 --rc geninfo_unexecuted_blocks=1 00:27:24.657 00:27:24.657 ' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:24.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.657 --rc genhtml_branch_coverage=1 00:27:24.657 --rc genhtml_function_coverage=1 00:27:24.657 --rc genhtml_legend=1 00:27:24.657 --rc geninfo_all_blocks=1 00:27:24.657 --rc geninfo_unexecuted_blocks=1 00:27:24.657 00:27:24.657 ' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:24.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.657 --rc genhtml_branch_coverage=1 00:27:24.657 --rc genhtml_function_coverage=1 00:27:24.657 --rc genhtml_legend=1 00:27:24.657 --rc geninfo_all_blocks=1 00:27:24.657 --rc geninfo_unexecuted_blocks=1 00:27:24.657 00:27:24.657 ' 00:27:24.657 21:20:32 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.657 
21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:24.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:24.657 21:20:32 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.657 21:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.231 21:20:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:31.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:31.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:31.231 Found net devices under 0000:86:00.0: cvl_0_0 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:31.231 Found net devices under 0000:86:00.1: cvl_0_1 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.231 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
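Aside: earlier in this trace, nvmf/common.sh line 33 logged "[: : integer expression expected" because an empty string reached `'[' '' -eq 1 ']'`. That is the classic empty-variable-meets-`-eq` pitfall; a hedged guard sketch (is_enabled is a hypothetical helper, not the SPDK fix):

```shell
# nvmf/common.sh line 33 errored above because an empty string reached
# '-eq'. Defaulting empty/unset to 0 with ${1:-0} keeps the test numeric.
# is_enabled is a hypothetical helper, not taken from the SPDK scripts.
is_enabled() { [ "${1:-0}" -eq 1 ]; }

is_enabled ""  && echo yes || echo no   # empty no longer errors out
is_enabled 1   && echo yes || echo no
```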
00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:27:31.232 00:27:31.232 --- 10.0.0.2 ping statistics --- 00:27:31.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.232 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:31.232 00:27:31.232 --- 10.0.0.1 ping statistics --- 00:27:31.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.232 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.232 ************************************ 00:27:31.232 START TEST nvmf_digest_clean 00:27:31.232 ************************************ 00:27:31.232 
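Aside: the nvmf_tcp_init sequence traced above (common.sh lines 250-291) reduces to moving one port of the NIC pair into a private namespace, addressing both ends on 10.0.0.0/24, opening TCP 4420, and pinging across. A hedged sketch of the same topology with a veth pair instead of the E810 ports cvl_0_0/cvl_0_1; names (tgt_ns_spdk, init0, tgt0) are illustrative, and DRY_RUN=1 (the default here) only prints each command so it can be inspected without root:

```shell
# Re-creates the namespace topology from the trace with a veth pair in
# place of the E810 ports cvl_0_0/cvl_0_1. Interface and namespace names
# are illustrative. DRY_RUN=1 (default) prints the commands; run as root
# with DRY_RUN=0 to actually apply them.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip netns add tgt_ns_spdk                      # target-side namespace
run ip link add init0 type veth peer name tgt0    # initiator/target pair
run ip link set tgt0 netns tgt_ns_spdk            # target end into the ns
run ip addr add 10.0.0.1/24 dev init0             # initiator IP
run ip netns exec tgt_ns_spdk ip addr add 10.0.0.2/24 dev tgt0  # target IP
run ip link set init0 up
run ip netns exec tgt_ns_spdk ip link set tgt0 up
run ip netns exec tgt_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i init0 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP
run ping -c 1 10.0.0.2                            # host -> namespace
run ip netns exec tgt_ns_spdk ping -c 1 10.0.0.1  # namespace -> host
```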
21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1455000 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1455000 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1455000 ']' 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.232 21:20:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.232 21:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.232 [2024-12-05 21:20:38.685936] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:31.232 [2024-12-05 21:20:38.685978] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.232 [2024-12-05 21:20:38.768599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.232 [2024-12-05 21:20:38.807585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.232 [2024-12-05 21:20:38.807619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.232 [2024-12-05 21:20:38.807634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.232 [2024-12-05 21:20:38.807640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.232 [2024-12-05 21:20:38.807645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:31.232 [2024-12-05 21:20:38.808179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.491 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.750 null0 00:27:31.750 [2024-12-05 21:20:39.643419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.750 [2024-12-05 21:20:39.667627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
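Aside: the run_bperf flow that follows (host/digest.sh lines 82-92) has four steps: start bdevperf paused on an RPC socket, start its framework, attach an NVMe-oF controller over TCP, then drive the workload from bdevperf.py. A condensed sketch; SPDK_DIR defaults to the workspace path seen in the trace and should be adjusted for another checkout, while all flags are copied verbatim from the log (randread, 4 KiB I/O, queue depth 128):

```shell
# The run_bperf flow traced below, condensed into its four steps.
# SPDK_DIR defaults to the workspace path from the trace; all flags are
# copied verbatim from the log (randread, 4 KiB I/O, QD 128).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
SOCK=/var/tmp/bperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

start_cmd="$SPDK_DIR/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc"
init_cmd="$SPDK_DIR/scripts/rpc.py -s $SOCK framework_start_init"
attach_cmd="$SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -b nvme0"
perf_cmd="$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests"

# Printed rather than executed: run the four lines in order (bdevperf in
# the background) on a host where the target listens on 10.0.0.2:4420.
printf '%s\n' "$start_cmd &" "$init_cmd" "$attach_cmd" "$perf_cmd"
```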
00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1455084 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1455084 /var/tmp/bperf.sock 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1455084 ']' 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.750 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.750 [2024-12-05 21:20:39.723619] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:31.750 [2024-12-05 21:20:39.723665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455084 ] 00:27:31.750 [2024-12-05 21:20:39.781778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.750 [2024-12-05 21:20:39.824371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.009 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.009 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:32.009 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:32.009 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:32.009 21:20:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:32.269 21:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.269 21:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.529 nvme0n1 00:27:32.529 21:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:32.529 21:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.529 Running I/O for 2 seconds... 00:27:34.501 24809.00 IOPS, 96.91 MiB/s [2024-12-05T20:20:42.609Z] 24841.50 IOPS, 97.04 MiB/s 00:27:34.501 Latency(us) 00:27:34.501 [2024-12-05T20:20:42.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.501 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:34.501 nvme0n1 : 2.00 24860.06 97.11 0.00 0.00 5143.17 2543.42 11734.06 00:27:34.501 [2024-12-05T20:20:42.609Z] =================================================================================================================== 00:27:34.501 [2024-12-05T20:20:42.609Z] Total : 24860.06 97.11 0.00 0.00 5143.17 2543.42 11734.06 00:27:34.501 { 00:27:34.501 "results": [ 00:27:34.501 { 00:27:34.501 "job": "nvme0n1", 00:27:34.501 "core_mask": "0x2", 00:27:34.501 "workload": "randread", 00:27:34.501 "status": "finished", 00:27:34.501 "queue_depth": 128, 00:27:34.501 "io_size": 4096, 00:27:34.501 "runtime": 2.003656, 00:27:34.501 "iops": 24860.05581796476, 00:27:34.501 "mibps": 97.10959303892484, 00:27:34.501 "io_failed": 0, 00:27:34.501 "io_timeout": 0, 00:27:34.501 "avg_latency_us": 5143.171591358191, 00:27:34.501 "min_latency_us": 2543.4209523809523, 00:27:34.501 "max_latency_us": 11734.064761904761 00:27:34.501 } 00:27:34.501 ], 00:27:34.501 "core_count": 1 00:27:34.501 } 00:27:34.501 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:34.501 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:34.501 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:34.501 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:34.501 | select(.opcode=="crc32c") 00:27:34.501 | "\(.module_name) \(.executed)"' 00:27:34.501 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1455084 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1455084 ']' 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1455084 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455084 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455084' 00:27:34.760 killing process with pid 1455084 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1455084 00:27:34.760 Received shutdown signal, test time was about 2.000000 seconds 00:27:34.760 00:27:34.760 Latency(us) 00:27:34.760 [2024-12-05T20:20:42.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.760 [2024-12-05T20:20:42.868Z] =================================================================================================================== 00:27:34.760 [2024-12-05T20:20:42.868Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.760 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1455084 00:27:35.019 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:35.019 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:35.019 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:35.019 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:35.019 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1455728 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1455728 /var/tmp/bperf.sock 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1455728 ']' 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.020 21:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:35.020 [2024-12-05 21:20:42.989446] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:35.020 [2024-12-05 21:20:42.989494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455728 ] 00:27:35.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.020 Zero copy mechanism will not be used. 
00:27:35.020 [2024-12-05 21:20:43.063017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.020 [2024-12-05 21:20:43.104793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.280 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.280 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:35.280 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:35.280 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:35.280 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:35.539 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.539 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.798 nvme0n1 00:27:35.798 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:35.798 21:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.799 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.799 Zero copy mechanism will not be used. 00:27:35.799 Running I/O for 2 seconds... 
00:27:38.111 6065.00 IOPS, 758.12 MiB/s [2024-12-05T20:20:46.219Z] 6012.50 IOPS, 751.56 MiB/s 00:27:38.111 Latency(us) 00:27:38.111 [2024-12-05T20:20:46.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.111 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:38.111 nvme0n1 : 2.00 6011.33 751.42 0.00 0.00 2659.20 651.46 10236.10 00:27:38.111 [2024-12-05T20:20:46.219Z] =================================================================================================================== 00:27:38.111 [2024-12-05T20:20:46.219Z] Total : 6011.33 751.42 0.00 0.00 2659.20 651.46 10236.10 00:27:38.111 { 00:27:38.111 "results": [ 00:27:38.111 { 00:27:38.111 "job": "nvme0n1", 00:27:38.111 "core_mask": "0x2", 00:27:38.111 "workload": "randread", 00:27:38.111 "status": "finished", 00:27:38.111 "queue_depth": 16, 00:27:38.111 "io_size": 131072, 00:27:38.111 "runtime": 2.003052, 00:27:38.111 "iops": 6011.32671543225, 00:27:38.111 "mibps": 751.4158394290313, 00:27:38.111 "io_failed": 0, 00:27:38.112 "io_timeout": 0, 00:27:38.112 "avg_latency_us": 2659.1980181997224, 00:27:38.112 "min_latency_us": 651.4590476190476, 00:27:38.112 "max_latency_us": 10236.099047619047 00:27:38.112 } 00:27:38.112 ], 00:27:38.112 "core_count": 1 00:27:38.112 } 00:27:38.112 21:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:38.112 21:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:38.112 21:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:38.112 21:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:38.112 | select(.opcode=="crc32c") 00:27:38.112 | "\(.module_name) \(.executed)"' 00:27:38.112 21:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1455728 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1455728 ']' 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1455728 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455728 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455728' 00:27:38.112 killing process with pid 1455728 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1455728 00:27:38.112 Received shutdown signal, test time was about 2.000000 seconds 
00:27:38.112 00:27:38.112 Latency(us) 00:27:38.112 [2024-12-05T20:20:46.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.112 [2024-12-05T20:20:46.220Z] =================================================================================================================== 00:27:38.112 [2024-12-05T20:20:46.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.112 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1455728 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1456203 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1456203 /var/tmp/bperf.sock 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1456203 ']' 00:27:38.370 21:20:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.370 [2024-12-05 21:20:46.276689] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:38.370 [2024-12-05 21:20:46.276739] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456203 ] 00:27:38.370 [2024-12-05 21:20:46.351425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.370 [2024-12-05 21:20:46.388489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:38.370 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:38.627 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.627 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.885 nvme0n1 00:27:39.144 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:39.144 21:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:39.144 Running I/O for 2 seconds... 
00:27:41.019 27798.00 IOPS, 108.59 MiB/s [2024-12-05T20:20:49.127Z] 27851.00 IOPS, 108.79 MiB/s 00:27:41.019 Latency(us) 00:27:41.019 [2024-12-05T20:20:49.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.019 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:41.019 nvme0n1 : 2.01 27851.12 108.79 0.00 0.00 4586.99 2371.78 6709.64 00:27:41.019 [2024-12-05T20:20:49.127Z] =================================================================================================================== 00:27:41.019 [2024-12-05T20:20:49.127Z] Total : 27851.12 108.79 0.00 0.00 4586.99 2371.78 6709.64 00:27:41.019 { 00:27:41.019 "results": [ 00:27:41.019 { 00:27:41.019 "job": "nvme0n1", 00:27:41.019 "core_mask": "0x2", 00:27:41.019 "workload": "randwrite", 00:27:41.019 "status": "finished", 00:27:41.019 "queue_depth": 128, 00:27:41.019 "io_size": 4096, 00:27:41.019 "runtime": 2.005736, 00:27:41.019 "iops": 27851.12297929538, 00:27:41.019 "mibps": 108.79344913787259, 00:27:41.019 "io_failed": 0, 00:27:41.019 "io_timeout": 0, 00:27:41.019 "avg_latency_us": 4586.993467098343, 00:27:41.019 "min_latency_us": 2371.7790476190476, 00:27:41.019 "max_latency_us": 6709.638095238095 00:27:41.019 } 00:27:41.019 ], 00:27:41.019 "core_count": 1 00:27:41.019 } 00:27:41.019 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:41.278 | select(.opcode=="crc32c") 00:27:41.278 | "\(.module_name) \(.executed)"' 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1456203 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1456203 ']' 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1456203 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.278 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1456203 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1456203' 00:27:41.537 killing process with pid 1456203 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1456203 00:27:41.537 Received shutdown signal, test time was about 2.000000 seconds 
00:27:41.537 00:27:41.537 Latency(us) 00:27:41.537 [2024-12-05T20:20:49.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.537 [2024-12-05T20:20:49.645Z] =================================================================================================================== 00:27:41.537 [2024-12-05T20:20:49.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1456203 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1456674 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1456674 /var/tmp/bperf.sock 00:27:41.537 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:41.538 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1456674 ']' 00:27:41.538 21:20:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:41.538 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.538 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:41.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:41.538 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.538 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:41.538 [2024-12-05 21:20:49.595028] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:41.538 [2024-12-05 21:20:49.595075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456674 ] 00:27:41.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:41.538 Zero copy mechanism will not be used. 
00:27:41.797 [2024-12-05 21:20:49.668146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.797 [2024-12-05 21:20:49.708044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.797 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.797 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:41.797 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:41.797 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:41.797 21:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:42.056 21:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.056 21:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.314 nvme0n1 00:27:42.314 21:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:42.314 21:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.314 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:42.314 Zero copy mechanism will not be used. 00:27:42.314 Running I/O for 2 seconds... 
00:27:44.627 6367.00 IOPS, 795.88 MiB/s [2024-12-05T20:20:52.735Z] 6691.50 IOPS, 836.44 MiB/s 00:27:44.627 Latency(us) 00:27:44.627 [2024-12-05T20:20:52.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.627 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:44.627 nvme0n1 : 2.00 6688.62 836.08 0.00 0.00 2387.89 1661.81 5180.46 00:27:44.627 [2024-12-05T20:20:52.735Z] =================================================================================================================== 00:27:44.627 [2024-12-05T20:20:52.735Z] Total : 6688.62 836.08 0.00 0.00 2387.89 1661.81 5180.46 00:27:44.627 { 00:27:44.627 "results": [ 00:27:44.627 { 00:27:44.627 "job": "nvme0n1", 00:27:44.627 "core_mask": "0x2", 00:27:44.627 "workload": "randwrite", 00:27:44.627 "status": "finished", 00:27:44.627 "queue_depth": 16, 00:27:44.627 "io_size": 131072, 00:27:44.627 "runtime": 2.003103, 00:27:44.627 "iops": 6688.622602032946, 00:27:44.627 "mibps": 836.0778252541182, 00:27:44.627 "io_failed": 0, 00:27:44.627 "io_timeout": 0, 00:27:44.627 "avg_latency_us": 2387.8874362200468, 00:27:44.627 "min_latency_us": 1661.8057142857142, 00:27:44.627 "max_latency_us": 5180.464761904762 00:27:44.627 } 00:27:44.627 ], 00:27:44.627 "core_count": 1 00:27:44.627 } 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:44.627 | select(.opcode=="crc32c") 00:27:44.627 | "\(.module_name) \(.executed)"' 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1456674 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1456674 ']' 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1456674 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:44.627 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.628 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1456674 00:27:44.628 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:44.628 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:44.628 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1456674' 00:27:44.628 killing process with pid 1456674 00:27:44.628 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1456674 00:27:44.628 Received shutdown signal, test time was about 2.000000 seconds 
00:27:44.628 00:27:44.628 Latency(us) 00:27:44.628 [2024-12-05T20:20:52.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.628 [2024-12-05T20:20:52.736Z] =================================================================================================================== 00:27:44.628 [2024-12-05T20:20:52.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.628 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1456674 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1455000 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1455000 ']' 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1455000 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455000 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455000' 00:27:44.887 killing process with pid 1455000 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1455000 00:27:44.887 21:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1455000 00:27:45.146 00:27:45.146 
real 0m14.452s 00:27:45.146 user 0m27.000s 00:27:45.146 sys 0m4.724s 00:27:45.146 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.146 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:45.146 ************************************ 00:27:45.146 END TEST nvmf_digest_clean 00:27:45.146 ************************************ 00:27:45.146 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:45.146 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:45.146 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:45.147 ************************************ 00:27:45.147 START TEST nvmf_digest_error 00:27:45.147 ************************************ 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1457393 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1457393 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1457393 ']' 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.147 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.147 [2024-12-05 21:20:53.206194] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:45.147 [2024-12-05 21:20:53.206233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.406 [2024-12-05 21:20:53.281160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.406 [2024-12-05 21:20:53.321064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.406 [2024-12-05 21:20:53.321100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:45.406 [2024-12-05 21:20:53.321107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.406 [2024-12-05 21:20:53.321113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.406 [2024-12-05 21:20:53.321121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.406 [2024-12-05 21:20:53.321645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.406 [2024-12-05 21:20:53.390087] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.406 21:20:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.406 null0 00:27:45.406 [2024-12-05 21:20:53.480944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.406 [2024-12-05 21:20:53.505140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1457412 00:27:45.406 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1457412 /var/tmp/bperf.sock 00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1457412 ']' 
00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.665 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.665 [2024-12-05 21:20:53.559045] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:45.665 [2024-12-05 21:20:53.559087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457412 ] 00:27:45.665 [2024-12-05 21:20:53.632699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.665 [2024-12-05 21:20:53.672739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.923 21:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.489 nvme0n1 00:27:46.489 21:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:46.490 21:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.490 21:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.490 21:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.490 21:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:46.490 21:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.490 Running I/O for 2 seconds... 00:27:46.490 [2024-12-05 21:20:54.426182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.426212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.426222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.438160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.438184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.438194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.447885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.447906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.447919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.460122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.460143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19677 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.460152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.467914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.467933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.467942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.479844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.479865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.479873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.489198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.489218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.489226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.499209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.499229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.499237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.509126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.509145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.509153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.516803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.516823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.516831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.526423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.526442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.526450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.537001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.537024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.537032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.549741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.549760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.549768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.560869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.560889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.560897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.574391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.574411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.574421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.582412] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.582431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.582438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.490 [2024-12-05 21:20:54.594138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.490 [2024-12-05 21:20:54.594158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.490 [2024-12-05 21:20:54.594167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.749 [2024-12-05 21:20:54.607015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.749 [2024-12-05 21:20:54.607035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.749 [2024-12-05 21:20:54.607043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.749 [2024-12-05 21:20:54.617134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.749 [2024-12-05 21:20:54.617153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.749 [2024-12-05 21:20:54.617162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:46.749 [2024-12-05 21:20:54.625868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.749 [2024-12-05 21:20:54.625888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.749 [2024-12-05 21:20:54.625896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.749 [2024-12-05 21:20:54.636910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.749 [2024-12-05 21:20:54.636932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.636940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.649141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.649162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.649170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.657337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.657357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.657365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.669152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.669173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.669181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.677565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.677584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.677592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.688282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.688302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.688310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.698506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.698526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 
21:20:54.698534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.708237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.708257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.708266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.717923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.717946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.717954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.726650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.726669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.726678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.738592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.738611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7049 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.738619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.746995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.747014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.747022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.758618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.758638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.758646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.770225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.770244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.770253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.781573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.781593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.781601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.790014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.790045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.799260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.799280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.799288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.809596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.809616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.809624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.820378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.820397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.820405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.828909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.828928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.828936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.837773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.837792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.837800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.750 [2024-12-05 21:20:54.846640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:46.750 [2024-12-05 21:20:54.846659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-12-05 21:20:54.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.856717] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.856737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.856745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.865060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.865079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.865087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.874045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.874064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.874072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.883326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.883345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.883356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.893214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.893233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.893241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.902522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.902541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.902550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.911320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.911339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.911347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.920698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.920718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.920726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.930382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.930402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.930410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.939178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.939197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.009 [2024-12-05 21:20:54.939206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.009 [2024-12-05 21:20:54.949477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.009 [2024-12-05 21:20:54.949495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:54.949503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:54.958667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:54.958687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:54.958695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:54.967560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:54.967583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:54.967591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:54.977937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:54.977955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:54.977964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:54.989254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:54.989274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:54.989283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:54.997151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:54.997170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4363 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:54.997178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.007906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.007925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.007934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.016160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.016180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.016189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.027666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.027685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.027693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.037869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.037888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.037896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.046784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.046803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.046811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.055315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.055334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.055342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.064776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.064796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.064804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.075320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 
21:20:55.075339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.075347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.083662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.083682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.083690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.094289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.094308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.094316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.103870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.103891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.103899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.010 [2024-12-05 21:20:55.111907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1dc52e0) 00:27:47.010 [2024-12-05 21:20:55.111927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.010 [2024-12-05 21:20:55.111935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.122373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.122395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.122403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.133202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.133222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.133234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.142804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.142823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.142832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.151124] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.151144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.151151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.163283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.163302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.171587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.171606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.171614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.181830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.181849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.181857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.192177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.192197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.192205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.200639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.200658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.200665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.210564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.210583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.210591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.219135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.219155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.219163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.227950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.227968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.227976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.236787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.236805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.236813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.246438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.246457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.246465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.255807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.255826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 
21:20:55.255834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.265637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.265657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.265665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.274128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.274146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.274154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.283439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.283459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.283467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.292794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.292813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7499 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.292824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.302235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.302254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.302262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.312052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.312071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.312079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.319689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.319708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.319716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.329227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.329246] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.329254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.339324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.339346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.339354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.348008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.348027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.348035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.356900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 21:20:55.356920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.356928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.269 [2024-12-05 21:20:55.365745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.269 [2024-12-05 
21:20:55.365767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.269 [2024-12-05 21:20:55.365775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.527 [2024-12-05 21:20:55.375808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.527 [2024-12-05 21:20:55.375833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.527 [2024-12-05 21:20:55.375844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.527 [2024-12-05 21:20:55.385282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.527 [2024-12-05 21:20:55.385304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.527 [2024-12-05 21:20:55.385312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.527 [2024-12-05 21:20:55.394314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.527 [2024-12-05 21:20:55.394336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.527 [2024-12-05 21:20:55.394345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.527 [2024-12-05 21:20:55.404297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dc52e0) 00:27:47.527 [2024-12-05 21:20:55.404317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.527 [2024-12-05 21:20:55.404324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.527 [2024-12-05 21:20:55.412898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.527 [2024-12-05 21:20:55.412919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.412927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 26029.00 IOPS, 101.68 MiB/s [2024-12-05T20:20:55.636Z] [2024-12-05 21:20:55.423057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.423078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.423085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.433168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.433189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.433197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:47.528 [2024-12-05 21:20:55.441844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.441863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.441871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.451340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.451375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.460127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.460147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.460155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.469387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.469407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.469415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.478460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.478480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.478488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.488495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.488516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.488524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.498537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.498558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.498567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.506527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.506547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.506554] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.515696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.515715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.515723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.525544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.525565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.525572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.537245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.537268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.537276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.546083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.546102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17979 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.546110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.555489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.555509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.555517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.566715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.566734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.566743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.577381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.577403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.577411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.585179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.585198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:9905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.585206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.596124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.596144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.606614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.606634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.606641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.615327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.615347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.615355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.528 [2024-12-05 21:20:55.625244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.528 [2024-12-05 21:20:55.625265] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.528 [2024-12-05 21:20:55.625273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.787 [2024-12-05 21:20:55.636406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.787 [2024-12-05 21:20:55.636429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.787 [2024-12-05 21:20:55.636438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.787 [2024-12-05 21:20:55.645356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.645383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.645391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.656244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.656265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.656273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.666914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.666934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.666942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.674803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.674823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.674831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.686245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.686264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.686272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.694498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.694517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.694525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.706559] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.706580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.706591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.716025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.716045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.716053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.724726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.724746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.724754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.734887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.734906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.734914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.744955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.744976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.744984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.752940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.752960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.752968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.764880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.764900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.764908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.776193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.776212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.776221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.784919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.784938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.784945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.797894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.797918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.797926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.806285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.806304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.806312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.816626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.816645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.816653] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.826185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.826204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.826212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.835539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.835558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.835566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.844074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.844094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.844101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.853199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.853219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.853227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.862113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.862132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.862139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.871759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.871779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.871786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.882127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.882146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.882155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.788 [2024-12-05 21:20:55.891894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:47.788 [2024-12-05 21:20:55.891914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.788 [2024-12-05 21:20:55.891922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.900034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.900054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.900062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.911488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.911508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.911516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.923843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.923863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.923871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.936389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 
21:20:55.936409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.936417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.944958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.944977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.944984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.955972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.955991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.955998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.967349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.967374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.967386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.976872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.976890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.976898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.985170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.985188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.985196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:55.997257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:55.997276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:55.997284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:56.009996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:56.010016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-12-05 21:20:56.010023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-12-05 21:20:56.020285] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.047 [2024-12-05 21:20:56.020304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.020312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.029373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.029391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.029399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.040535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.040554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.040562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.049813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.049832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.049840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.060873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.060894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.060904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.069291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.069312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.069320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.080031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.080050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.080058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.089338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.089358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.089371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.099613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.099632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.099640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.109705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.109724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.109732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.120517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.120541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.120550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.129520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.129541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.129548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.139448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.139469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.139481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.048 [2024-12-05 21:20:56.147625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.048 [2024-12-05 21:20:56.147645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.048 [2024-12-05 21:20:56.147653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.306 [2024-12-05 21:20:56.157865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.157886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.157894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.167286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.167305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12057 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.167312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.177133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.177151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.177160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.185817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.185835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.185843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.198055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.198074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.198082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.210449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.210468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.210475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.218748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.218766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.218774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.230222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.230246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.230254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.239257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.239276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.239284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.248135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 
21:20:56.248154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.248162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.257177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.257196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.257204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.265973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.265993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.266001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.277927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.277946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.277954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.289292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.289311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.289318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.297566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.297585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.297593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.307568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.307587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.307595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.315931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.315950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.315958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.327994] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.328014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.328022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.340629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.340648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.340656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.352874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.352892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.352900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.364684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.364704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.364712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.376034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.376052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.376060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.386601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.386619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.386627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.395188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.395207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.395215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.307 [2024-12-05 21:20:56.406173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.307 [2024-12-05 21:20:56.406192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.307 [2024-12-05 21:20:56.406203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.566 25743.00 IOPS, 100.56 MiB/s [2024-12-05T20:20:56.674Z] [2024-12-05 21:20:56.418398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc52e0) 00:27:48.566 [2024-12-05 21:20:56.418418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.566 [2024-12-05 21:20:56.418426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.566 00:27:48.566 Latency(us) 00:27:48.566 [2024-12-05T20:20:56.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.566 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:48.566 nvme0n1 : 2.00 25761.08 100.63 0.00 0.00 4963.83 2371.78 15416.56 00:27:48.566 [2024-12-05T20:20:56.674Z] =================================================================================================================== 00:27:48.566 [2024-12-05T20:20:56.674Z] Total : 25761.08 100.63 0.00 0.00 4963.83 2371.78 15416.56 00:27:48.566 { 00:27:48.566 "results": [ 00:27:48.566 { 00:27:48.566 "job": "nvme0n1", 00:27:48.566 "core_mask": "0x2", 00:27:48.566 "workload": "randread", 00:27:48.566 "status": "finished", 00:27:48.566 "queue_depth": 128, 00:27:48.566 "io_size": 4096, 00:27:48.566 "runtime": 2.003953, 00:27:48.566 "iops": 25761.08321901761, 00:27:48.566 "mibps": 100.62923132428755, 00:27:48.566 "io_failed": 0, 00:27:48.566 "io_timeout": 0, 00:27:48.566 "avg_latency_us": 4963.8280041029275, 00:27:48.566 "min_latency_us": 2371.7790476190476, 00:27:48.566 "max_latency_us": 15416.56380952381 00:27:48.566 } 00:27:48.566 ], 00:27:48.566 "core_count": 1 00:27:48.566 } 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:48.566 | .driver_specific 00:27:48.566 | .nvme_error 00:27:48.566 | .status_code 00:27:48.566 | .command_transient_transport_error' 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1457412 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1457412 ']' 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1457412 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.566 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457412 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457412' 00:27:48.825 killing process with pid 1457412 00:27:48.825 21:20:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1457412 00:27:48.825 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.825 00:27:48.825 Latency(us) 00:27:48.825 [2024-12-05T20:20:56.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.825 [2024-12-05T20:20:56.933Z] =================================================================================================================== 00:27:48.825 [2024-12-05T20:20:56.933Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1457412 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:48.825 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1457925 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1457925 /var/tmp/bperf.sock 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1457925 ']' 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.826 21:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.826 [2024-12-05 21:20:56.906122] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:48.826 [2024-12-05 21:20:56.906171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457925 ] 00:27:48.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.826 Zero copy mechanism will not be used. 
00:27:49.085 [2024-12-05 21:20:56.980763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.085 [2024-12-05 21:20:57.020042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.085 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.085 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:49.085 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.085 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.344 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:49.344 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.344 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.344 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.344 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.344 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.914 nvme0n1 00:27:49.914 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:49.914 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.914 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.914 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.914 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:49.914 21:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.914 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.914 Zero copy mechanism will not be used. 00:27:49.914 Running I/O for 2 seconds... 00:27:49.914 [2024-12-05 21:20:57.866794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.866829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.866840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.872296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.872321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.872330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.914 
[2024-12-05 21:20:57.877580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.877605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.877613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.882902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.882923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.882932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.888141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.888162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.888170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.893491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.893511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.893519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.898838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.898858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.898866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.905142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.905165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.905173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.910657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.910678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.910686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.915929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.915950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.915958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.921240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.921262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.921271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.926564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.926584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.926592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.931886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.931907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.931915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.937250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.937270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.937278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.942632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.914 [2024-12-05 21:20:57.942652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.914 [2024-12-05 21:20:57.942664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.914 [2024-12-05 21:20:57.947899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.947919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.947927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.953149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.953170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.953178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.958400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.958420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.958428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.963648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.963668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.963675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.968790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.968811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.968818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.973973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.973993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.974001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.976718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.976739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.976747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.981951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.981971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.981978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.987121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.987144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.987152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.992558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.992577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.992585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:57.997849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:57.997869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:57.997877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:58.003386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:58.003407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:58.003416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:58.010287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:58.010308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:58.010316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.915 [2024-12-05 21:20:58.017752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:49.915 [2024-12-05 21:20:58.017774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.915 [2024-12-05 21:20:58.017783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.175 [2024-12-05 21:20:58.025626] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.025648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.025656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.033719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.033741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.033750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.041980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.042002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.049080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.049102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.049110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.056191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.056213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.056221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.062141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.062162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.062170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.067543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.067564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.067572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.073026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.073046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.073054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.078520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.078540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.078548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.083849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.083870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.083878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.089497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.089517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.089525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.094909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.094929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 
21:20:58.094943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.101064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.101084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.101093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.106686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.106707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.106715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.112062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.112083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.112091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.117491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.117511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.117520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.123326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.123347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.123355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.130235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.130259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.130267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.137247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.137269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.137277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.145080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.145104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.145113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.152666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.152687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.152695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.161177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.161197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.169859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.169881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.169889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.178002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 
00:27:50.176 [2024-12-05 21:20:58.178024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.178032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.186115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.186137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.186145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.194167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.176 [2024-12-05 21:20:58.194190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.176 [2024-12-05 21:20:58.194199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.176 [2024-12-05 21:20:58.202332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.202354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.202362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.209602] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.209625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.209633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.217653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.217692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.217704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.225880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.225903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.225913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.234687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.234710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.234718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.242855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.242878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.242886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.250598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.250622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.250632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.258492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.258514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.258522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.265876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.265898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.265906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.273145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.273167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.273175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.177 [2024-12-05 21:20:58.280683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.177 [2024-12-05 21:20:58.280705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.177 [2024-12-05 21:20:58.280714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.287032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.287058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.287066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.293709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.293731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.293739] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.301994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.302015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.302023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.309675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.309698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.309706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.317058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.317080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.317088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.325469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.325490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.325498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.332175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.332196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.332204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.337555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.337575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.337583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.342874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.342894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.342902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.348165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.348186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.348193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.353608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.353629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.353636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.359063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.359084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.359092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.364483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.364504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.364511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.369788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.369809] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.369817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.375148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.375169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.375177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.380532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.380554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.380562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.386010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.386031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.386039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.391516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.391536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.391547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.396954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.396976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.396983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.402254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.402275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.402283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.407553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.407574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.407582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.413003] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.413023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.413031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.418390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.418410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.418418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.423721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.423741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.423748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.429258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.429279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.429286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.434845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.434866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.434873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.440193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.440217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.440225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.445542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.445563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.445570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.451005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.451024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.451032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.456276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.456296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.456304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.461819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.461840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.461848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.466987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.467008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.467016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.472586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.472607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.472615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.478445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.478465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.478473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.483803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.483824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.483832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.489154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.489176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.489184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.494457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.494477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.494485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.499961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.499982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.499989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.506064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.506085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.506092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.438 [2024-12-05 21:20:58.511631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.438 [2024-12-05 21:20:58.511650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.438 [2024-12-05 21:20:58.511657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.439 [2024-12-05 21:20:58.516928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.439 [2024-12-05 21:20:58.516948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.439 [2024-12-05 21:20:58.516956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.439 [2024-12-05 21:20:58.522106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.439 [2024-12-05 21:20:58.522127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.439 [2024-12-05 21:20:58.522135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.439 [2024-12-05 21:20:58.527201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.439 [2024-12-05 21:20:58.527221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.439 [2024-12-05 21:20:58.527229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.439 [2024-12-05 21:20:58.532345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.439 [2024-12-05 21:20:58.532365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.439 [2024-12-05 21:20:58.532381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.439 [2024-12-05 21:20:58.538489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.439 [2024-12-05 21:20:58.538511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.439 [2024-12-05 21:20:58.538519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.544170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.544193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.544201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.549697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.549719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.549726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.555131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.555152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.555160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.560509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 
00:27:50.699 [2024-12-05 21:20:58.560529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.560536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.565792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.565813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.565820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.571090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.571110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.571118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.576436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.576456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.576464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.582783] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.582804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.582812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.590060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.590082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.590090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.699 [2024-12-05 21:20:58.596859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.699 [2024-12-05 21:20:58.596879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.699 [2024-12-05 21:20:58.596887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.603420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.603441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.603449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.609933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.609954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.609962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.616547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.616578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.616586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.623719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.623740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.623749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.631204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.631226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.631234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.637684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.637705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.637717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.644167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.644189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.644197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.649830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.649850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.649858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.655215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.655236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.655244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.660631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.660650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.660658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.665971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.665991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.665999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.672205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.672226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.672233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.679197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.679218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.679226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.686717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.686738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.686746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.693167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.693191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.693199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.699607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.699627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.706407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.706428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.706436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.712737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.712757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.712765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.719175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.719196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.719203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.725453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.725474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.725481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.731411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.731432] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.731440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.739255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.739277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.739285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.745608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.745629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.745638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.751137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.751156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.751165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.756771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.756791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.756800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.762240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.762261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.762269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.700 [2024-12-05 21:20:58.767647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.700 [2024-12-05 21:20:58.767667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.700 [2024-12-05 21:20:58.767675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.701 [2024-12-05 21:20:58.773083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.701 [2024-12-05 21:20:58.773104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.701 [2024-12-05 21:20:58.773112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.701 [2024-12-05 21:20:58.778703] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.701 [2024-12-05 21:20:58.778723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.701 [2024-12-05 21:20:58.778730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.701 [2024-12-05 21:20:58.784240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.701 [2024-12-05 21:20:58.784260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.701 [2024-12-05 21:20:58.784268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.701 [2024-12-05 21:20:58.789941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.701 [2024-12-05 21:20:58.789961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.701 [2024-12-05 21:20:58.789969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.701 [2024-12-05 21:20:58.795299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.701 [2024-12-05 21:20:58.795319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.701 [2024-12-05 21:20:58.795330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:50.701 [2024-12-05 21:20:58.800643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.701 [2024-12-05 21:20:58.800664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.701 [2024-12-05 21:20:58.800671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.961 [2024-12-05 21:20:58.806363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.961 [2024-12-05 21:20:58.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.961 [2024-12-05 21:20:58.806403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.961 [2024-12-05 21:20:58.811698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.961 [2024-12-05 21:20:58.811729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.961 [2024-12-05 21:20:58.811737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.961 [2024-12-05 21:20:58.817289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.961 [2024-12-05 21:20:58.817309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.961 [2024-12-05 21:20:58.817317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.961 [2024-12-05 21:20:58.822740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.961 [2024-12-05 21:20:58.822761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.822769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.828352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.828379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.828387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.834066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.834086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.834094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.839814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.839835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 
21:20:58.839843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.845470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.845496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.845504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.851012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.851033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.851041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.856426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.856446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.856454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.861747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.861768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.861776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.962 5106.00 IOPS, 638.25 MiB/s [2024-12-05T20:20:59.070Z] [2024-12-05 21:20:58.868043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.868064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.868072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.873352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.873379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.873387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.878734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.878754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.878763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.884633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 
21:20:58.884653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.884662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.890181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.890201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.890209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.895535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.895555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.895563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.900788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.900809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.900817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.906110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.906130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.906138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.911365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.911392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.911400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.916585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.916606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.916613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.921794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.921814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.921822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.926988] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.927009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.927017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.932114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.932134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.932142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.937314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.937338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.937346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.942544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.942575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.942583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.947758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.947778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.947786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.953031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.953051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.953058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.958260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.958280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.958288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.963477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.962 [2024-12-05 21:20:58.963497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.962 [2024-12-05 21:20:58.963504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.962 [2024-12-05 21:20:58.968674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:58.968695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:58.968703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:58.973965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:58.973985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:58.973993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:58.979202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:58.979221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:58.979229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:58.984439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:58.984460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:58.984468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:58.989720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:58.989740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:58.989747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:58.994983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:58.995003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:58.995011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.000200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.000221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.000228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.005389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.005409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.005417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.010564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.010585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.015722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.015742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.015749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.020976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.020996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.021003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.026154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.026174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.026185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.031336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.031356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.031364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.036602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.036622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.036630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.041834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.041854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.041861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.046971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 
21:20:59.046991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.046999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.052166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.052186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.052193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.057366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.057396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.057404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.963 [2024-12-05 21:20:59.062529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:50.963 [2024-12-05 21:20:59.062550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.963 [2024-12-05 21:20:59.062558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.224 [2024-12-05 21:20:59.067802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24dcdd0) 00:27:51.224 [2024-12-05 21:20:59.067822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.224 [2024-12-05 21:20:59.067830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.224 [2024-12-05 21:20:59.073043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.224 [2024-12-05 21:20:59.073067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.224 [2024-12-05 21:20:59.073075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.224 [2024-12-05 21:20:59.078275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.224 [2024-12-05 21:20:59.078295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.224 [2024-12-05 21:20:59.078303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.224 [2024-12-05 21:20:59.083512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.224 [2024-12-05 21:20:59.083531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.224 [2024-12-05 21:20:59.083540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.224 [2024-12-05 21:20:59.088759] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.224 [2024-12-05 21:20:59.088779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.224 [2024-12-05 21:20:59.088787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.224 [2024-12-05 21:20:59.093916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.093936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.093944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.099186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.099206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.099214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.104474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.104493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.104501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.109692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.109713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.109721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.114887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.114907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.114915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.120144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.120163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.120170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.125386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.125406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.125414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.130593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.130613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.130620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.135871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.135891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.135899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.141109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.141129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.141137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.146314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.146335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.146342] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.151528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.151548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.151556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.156470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.156491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.156499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.161712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.161733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.161746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.164951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.164970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.164978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.169406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.169427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.169434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.176140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.176160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.176168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.182621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.182642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.182650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.189853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.189875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.189883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.197455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.197477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.197484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.205363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.205390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.205398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.211637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.211657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.225 [2024-12-05 21:20:59.211665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.225 [2024-12-05 21:20:59.217584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.225 [2024-12-05 21:20:59.217606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.225 [2024-12-05 21:20:59.217615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.225 [2024-12-05 21:20:59.223311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.225 [2024-12-05 21:20:59.223332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.225 [2024-12-05 21:20:59.223339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.225 [2024-12-05 21:20:59.228557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.225 [2024-12-05 21:20:59.228579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.225 [2024-12-05 21:20:59.228586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.225 [2024-12-05 21:20:59.233830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.225 [2024-12-05 21:20:59.233852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.225 [2024-12-05 21:20:59.233860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.225 [2024-12-05 21:20:59.239053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.225 [2024-12-05 21:20:59.239074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.225 [2024-12-05 21:20:59.239082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.225 [2024-12-05 21:20:59.244357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.244385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.244393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.249560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.249581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.249589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.254764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.254784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.254792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.259944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.259964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.259976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.265205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.265225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.265233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.270415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.270435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.270443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.275564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.275584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.275592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.280750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.280771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.280779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.285946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.285966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.285974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.291160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.291181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.291188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.296447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.296467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.296475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.301674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.301694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.301702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.306948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.306972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.306980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.312659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.312680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.312688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.317931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.317952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.317960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.323127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.323148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.323156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.226 [2024-12-05 21:20:59.329095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.226 [2024-12-05 21:20:59.329117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.226 [2024-12-05 21:20:59.329125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.334815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.334837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.334845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.340032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.340052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.340060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.345250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.345272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.345281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.350512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.350532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.350540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.355758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.355778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.355785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.361056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.361076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.361084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.366289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.366310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.371586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.371607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.371615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.377003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.377024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.377032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.383436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.383457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.383466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.388657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.388678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.388685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.393721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.393741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.393749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.398983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.399004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.399015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.404303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.404323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.487 [2024-12-05 21:20:59.404330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.487 [2024-12-05 21:20:59.409584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.487 [2024-12-05 21:20:59.409604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.409613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.414858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.414879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.414887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.420215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.420235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.420243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.425477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.425497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.425505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.430702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.430722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.430730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.435939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.435959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.435967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.441174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.441194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.441202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.446425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.446445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.446452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.451801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.451820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.451828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.457062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.457081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.457090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.462267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.462287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.462295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.467471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.467490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.467498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.472705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.472726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.472733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.477892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.477912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.477920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.483111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.483131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.483139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.488280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.488300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.488311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.493455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.493475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.493483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.498650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.498671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.498679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.503904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.503924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.503932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.509196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.509216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.509224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.514354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.514382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.514390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.519610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.519631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.519639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.524946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.524966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.524975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.488 [2024-12-05 21:20:59.530159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.488 [2024-12-05 21:20:59.530179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.488 [2024-12-05 21:20:59.530187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.535348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.535378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.535387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.540969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.540990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.540997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.548583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.548605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.548613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.556353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.556382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.556390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.564009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.564031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.564039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.572670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.572692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.572700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.580444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.580465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.580473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.489 [2024-12-05 21:20:59.588616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.489 [2024-12-05 21:20:59.588638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.489 [2024-12-05 21:20:59.588646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.596422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.596444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.596452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.603992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.604013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.604021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.612414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.612436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.612445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.619750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.619771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.619780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.627855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.627876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.627884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.635340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.635363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.635377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.642906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.642929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.642938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.650409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.650431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.650439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.657715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.657737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05 21:20:59.657746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.749 [2024-12-05 21:20:59.664773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:51.749 [2024-12-05 21:20:59.664794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.749 [2024-12-05
21:20:59.664806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.749 [2024-12-05 21:20:59.672214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.749 [2024-12-05 21:20:59.672237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.749 [2024-12-05 21:20:59.672245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.749 [2024-12-05 21:20:59.679382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.749 [2024-12-05 21:20:59.679405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.749 [2024-12-05 21:20:59.679412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.686046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.686069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.686078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.693358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.693387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.693395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.700332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.700352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.700361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.705924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.705945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.705953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.711155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.711176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.711184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.717075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.717096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.717104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.724955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.724981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.724989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.732324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.732347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.732355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.739792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.739814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.739822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.746989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.747012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.747020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.754572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.754594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.754603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.762614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.762637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.762645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.770724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.770745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.770754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.777598] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.777620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.777629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.784304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.784326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.784338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.791618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.791641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.791650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.798081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.798102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.798110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.806084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.806106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.806114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.814122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.814143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.814151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.821671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.821692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.821700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.829068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.829090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.829098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.835712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.835734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.835742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.843271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.843293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.750 [2024-12-05 21:20:59.843301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.750 [2024-12-05 21:20:59.851206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:51.750 [2024-12-05 21:20:59.851233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.751 [2024-12-05 21:20:59.851241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.010 [2024-12-05 21:20:59.858874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0) 00:27:52.010 [2024-12-05 21:20:59.858896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.010 [2024-12-05 
21:20:59.858904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.010 [2024-12-05 21:20:59.864782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dcdd0)
00:27:52.010 [2024-12-05 21:20:59.864803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.010 [2024-12-05 21:20:59.864811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.010 5197.00 IOPS, 649.62 MiB/s
00:27:52.010 Latency(us)
00:27:52.010 [2024-12-05T20:21:00.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.010 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:52.010 nvme0n1 : 2.00 5200.71 650.09 0.00 0.00 3073.90 702.17 8800.55
00:27:52.010 [2024-12-05T20:21:00.118Z] ===================================================================================================================
00:27:52.010 [2024-12-05T20:21:00.118Z] Total : 5200.71 650.09 0.00 0.00 3073.90 702.17 8800.55
00:27:52.011 {
00:27:52.011   "results": [
00:27:52.011     {
00:27:52.011       "job": "nvme0n1",
00:27:52.011       "core_mask": "0x2",
00:27:52.011       "workload": "randread",
00:27:52.011       "status": "finished",
00:27:52.011       "queue_depth": 16,
00:27:52.011       "io_size": 131072,
00:27:52.011       "runtime": 2.00165,
00:27:52.011       "iops": 5200.709414732845,
00:27:52.011       "mibps": 650.0886768416057,
00:27:52.011       "io_failed": 0,
00:27:52.011       "io_timeout": 0,
00:27:52.011       "avg_latency_us": 3073.904946342802,
00:27:52.011       "min_latency_us": 702.1714285714286,
00:27:52.011       "max_latency_us": 8800.548571428571
00:27:52.011     }
00:27:52.011   ],
00:27:52.011   "core_count": 1
00:27:52.011 }
00:27:52.011 21:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:52.011 21:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:52.011 21:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:52.011 | .driver_specific
00:27:52.011 | .nvme_error
00:27:52.011 | .status_code
00:27:52.011 | .command_transient_transport_error'
00:27:52.011 21:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 336 > 0 ))
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1457925
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1457925 ']'
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1457925
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:52.011 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457925
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457925'
killing process with pid 1457925
21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1457925
00:27:52.270 Received shutdown signal, test time was about 2.000000 seconds
00:27:52.270
00:27:52.270 Latency(us)
00:27:52.270 [2024-12-05T20:21:00.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.270 [2024-12-05T20:21:00.378Z] ===================================================================================================================
00:27:52.270 [2024-12-05T20:21:00.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1457925
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1458605
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1458605 /var/tmp/bperf.sock
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:52.270 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1458605 ']'
00:27:52.271 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:52.271 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:52.271 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:52.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:52.271 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:52.271 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:52.271 [2024-12-05 21:21:00.358253] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:27:52.271 [2024-12-05 21:21:00.358305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458605 ]
00:27:52.530 [2024-12-05 21:21:00.434744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.530 [2024-12-05 21:21:00.472421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:52.530 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:52.530 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:52.530 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:52.530 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:52.790
21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:52.790 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.790 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:52.790 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.790 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:52.790 21:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:53.049 nvme0n1
00:27:53.049 21:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:53.049 21:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.049 21:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:53.049 21:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.049 21:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:53.049 21:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:53.310 Running I/O for 2 seconds...
00:27:53.310 [2024-12-05 21:21:01.181654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee49b0 00:27:53.310 [2024-12-05 21:21:01.182977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.183007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.191699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eefae0 00:27:53.310 [2024-12-05 21:21:01.193195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.193218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.201848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efa7d8 00:27:53.310 [2024-12-05 21:21:01.203484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.203505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.208672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee8d30 00:27:53.310 [2024-12-05 21:21:01.209355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.209379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.218412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee01f8 00:27:53.310 [2024-12-05 21:21:01.219348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.219371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.227948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ede470 00:27:53.310 [2024-12-05 21:21:01.229045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.229065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.237384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef81e0 00:27:53.310 [2024-12-05 21:21:01.238591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.238611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.246774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2948 00:27:53.310 [2024-12-05 21:21:01.247996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.248016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.255125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eea248 00:27:53.310 [2024-12-05 21:21:01.256077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.256097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.263848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eef270 00:27:53.310 [2024-12-05 21:21:01.264922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.264941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.274752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7970 00:27:53.310 [2024-12-05 21:21:01.276311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.276330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.281136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eee190 00:27:53.310 [2024-12-05 21:21:01.282001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.282020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.291993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eebfd0 00:27:53.310 [2024-12-05 21:21:01.293287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.293306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.299902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee01f8 00:27:53.310 [2024-12-05 21:21:01.300711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.300730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.309486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eef270 00:27:53.310 [2024-12-05 21:21:01.310631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.310650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.318745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eebfd0 00:27:53.310 [2024-12-05 21:21:01.319437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 
[2024-12-05 21:21:01.319457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.327584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef5378 00:27:53.310 [2024-12-05 21:21:01.328591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.328611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.336119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eef270 00:27:53.310 [2024-12-05 21:21:01.336813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.336832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.345384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efa7d8 00:27:53.310 [2024-12-05 21:21:01.346376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.354656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0788 00:27:53.310 [2024-12-05 21:21:01.355767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9061 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.355785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.363901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6890 00:27:53.310 [2024-12-05 21:21:01.365071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.365090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.372325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeff18 00:27:53.310 [2024-12-05 21:21:01.373252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.310 [2024-12-05 21:21:01.373272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.310 [2024-12-05 21:21:01.381409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee9168 00:27:53.311 [2024-12-05 21:21:01.382416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.311 [2024-12-05 21:21:01.382438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:53.311 [2024-12-05 21:21:01.390681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0350 00:27:53.311 [2024-12-05 21:21:01.391829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.311 [2024-12-05 21:21:01.391848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:53.311 [2024-12-05 21:21:01.399955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2510 00:27:53.311 [2024-12-05 21:21:01.401141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.311 [2024-12-05 21:21:01.401160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:53.311 [2024-12-05 21:21:01.409219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eedd58 00:27:53.311 [2024-12-05 21:21:01.410528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.311 [2024-12-05 21:21:01.410546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:53.311 [2024-12-05 21:21:01.415818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef81e0 00:27:53.571 [2024-12-05 21:21:01.416427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.416448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.425278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0350 00:27:53.571 [2024-12-05 21:21:01.426017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.426036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.434406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef8e88 00:27:53.571 [2024-12-05 21:21:01.435148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.435168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.443289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eed0b0 00:27:53.571 [2024-12-05 21:21:01.444114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.444133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.454161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee7818 00:27:53.571 [2024-12-05 21:21:01.455339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.455358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.460981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eee190 00:27:53.571 
[2024-12-05 21:21:01.461641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.461663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.470343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef57b0 00:27:53.571 [2024-12-05 21:21:01.471090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.471109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.481357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efbcf0 00:27:53.571 [2024-12-05 21:21:01.482633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.482652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.490615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7970 00:27:53.571 [2024-12-05 21:21:01.492047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.492065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.499657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) 
with pdu=0x200016ee4578 00:27:53.571 [2024-12-05 21:21:01.501037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.501056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.507054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee8d30 00:27:53.571 [2024-12-05 21:21:01.507678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.507697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.516248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2d80 00:27:53.571 [2024-12-05 21:21:01.517001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.517020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.524754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee5220 00:27:53.571 [2024-12-05 21:21:01.526079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.526097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.533059] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef96f8 00:27:53.571 [2024-12-05 21:21:01.533671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.533690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.541306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee88f8 00:27:53.571 [2024-12-05 21:21:01.542002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.542021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.550652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eee190 00:27:53.571 [2024-12-05 21:21:01.551458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.551477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.560049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee7c50 00:27:53.571 [2024-12-05 21:21:01.560941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.560959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 
21:21:01.569366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6fa8 00:27:53.571 [2024-12-05 21:21:01.570398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.570417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.578202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eecc78 00:27:53.571 [2024-12-05 21:21:01.578937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.578956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:53.571 [2024-12-05 21:21:01.587288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef35f0 00:27:53.571 [2024-12-05 21:21:01.588314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.571 [2024-12-05 21:21:01.588332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.596532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edfdc0 00:27:53.572 [2024-12-05 21:21:01.597673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.597691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 
p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.605795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eebb98 00:27:53.572 [2024-12-05 21:21:01.607048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.607066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.615049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee38d0 00:27:53.572 [2024-12-05 21:21:01.616441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.616468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.624449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee5ec8 00:27:53.572 [2024-12-05 21:21:01.626015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.626033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.630838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eea680 00:27:53.572 [2024-12-05 21:21:01.631546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.631565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.639388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2510 00:27:53.572 [2024-12-05 21:21:01.640070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.640089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.648724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeaef0 00:27:53.572 [2024-12-05 21:21:01.649567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.649586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.660380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef20d8 00:27:53.572 [2024-12-05 21:21:01.661877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.661895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.666631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee23b8 00:27:53.572 [2024-12-05 21:21:01.667292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.667311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:53.572 [2024-12-05 21:21:01.676031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efda78 00:27:53.572 [2024-12-05 21:21:01.676856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.572 [2024-12-05 21:21:01.676874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.684619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eed0b0 00:27:53.832 [2024-12-05 21:21:01.685431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.685451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.694140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eef6a8 00:27:53.832 [2024-12-05 21:21:01.695060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.695085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.703650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee73e0 00:27:53.832 [2024-12-05 21:21:01.704703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:53.832 [2024-12-05 21:21:01.704722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.713103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee95a0 00:27:53.832 [2024-12-05 21:21:01.714219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.722440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eed0b0 00:27:53.832 [2024-12-05 21:21:01.723632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.723651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.731703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef8a50 00:27:53.832 [2024-12-05 21:21:01.733018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.733036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.739575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef46d0 00:27:53.832 [2024-12-05 21:21:01.740441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14584 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.740459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.748299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6b70 00:27:53.832 [2024-12-05 21:21:01.749155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.749173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.757152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6458 00:27:53.832 [2024-12-05 21:21:01.758017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.758035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.766278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efdeb0 00:27:53.832 [2024-12-05 21:21:01.767317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.767337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.775730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eec840 00:27:53.832 [2024-12-05 21:21:01.777049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:116 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.777068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.785084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efe720 00:27:53.832 [2024-12-05 21:21:01.786499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.786517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.794327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee4de8 00:27:53.832 [2024-12-05 21:21:01.795851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.795870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:53.832 [2024-12-05 21:21:01.800592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef3a28 00:27:53.832 [2024-12-05 21:21:01.801279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.832 [2024-12-05 21:21:01.801297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:53.833 [2024-12-05 21:21:01.808963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee95a0 00:27:53.833 [2024-12-05 21:21:01.809680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.809698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.818269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef1ca0
00:27:53.833 [2024-12-05 21:21:01.819088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.819106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.829045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee8088
00:27:53.833 [2024-12-05 21:21:01.830252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.830271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.836605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee84c0
00:27:53.833 [2024-12-05 21:21:01.837136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.837155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.845963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef57b0
00:27:53.833 [2024-12-05 21:21:01.846590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.846608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.855290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6b70
00:27:53.833 [2024-12-05 21:21:01.856060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.856078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.863930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee0630
00:27:53.833 [2024-12-05 21:21:01.864923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.864942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.872826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eddc00
00:27:53.833 [2024-12-05 21:21:01.873790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.873809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.882121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0788
00:27:53.833 [2024-12-05 21:21:01.883149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.892248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee9168
00:27:53.833 [2024-12-05 21:21:01.893705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.893723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.898498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee0a68
00:27:53.833 [2024-12-05 21:21:01.899122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.899141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.908005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eec840
00:27:53.833 [2024-12-05 21:21:01.908948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.908967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.917260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4298
00:27:53.833 [2024-12-05 21:21:01.918316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.918334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.926512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0788
00:27:53.833 [2024-12-05 21:21:01.927694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.927715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:53.833 [2024-12-05 21:21:01.935906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efd640
00:27:53.833 [2024-12-05 21:21:01.937230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.833 [2024-12-05 21:21:01.937249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.945441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ede470
00:27:54.093 [2024-12-05 21:21:01.946889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.946907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.954824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef31b8
00:27:54.093 [2024-12-05 21:21:01.956402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.956421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.961208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4b08
00:27:54.093 [2024-12-05 21:21:01.961948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.961967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.969664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef46d0
00:27:54.093 [2024-12-05 21:21:01.970317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.970336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.980090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7970
00:27:54.093 [2024-12-05 21:21:01.980983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.981002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.988319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eec840
00:27:54.093 [2024-12-05 21:21:01.989195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.989213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:01.997591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efc998
00:27:54.093 [2024-12-05 21:21:01.998582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:01.998600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.006324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edfdc0
00:27:54.093 [2024-12-05 21:21:02.007620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.007640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.013911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6b70
00:27:54.093 [2024-12-05 21:21:02.014600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.014618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.022879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7100
00:27:54.093 [2024-12-05 21:21:02.023573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.023591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.033286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edf118
00:27:54.093 [2024-12-05 21:21:02.034381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.034399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.042632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0bc0
00:27:54.093 [2024-12-05 21:21:02.043851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.043870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.050332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eef6a8
00:27:54.093 [2024-12-05 21:21:02.050875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.050894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.059672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efb048
00:27:54.093 [2024-12-05 21:21:02.060330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.060348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.069099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee7818
00:27:54.093 [2024-12-05 21:21:02.069858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.069877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.077722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee1710
00:27:54.093 [2024-12-05 21:21:02.078775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.078793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.086483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efa7d8
00:27:54.093 [2024-12-05 21:21:02.087423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.087442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.095729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeaef0
00:27:54.093 [2024-12-05 21:21:02.096788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.096806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.104987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efb048
00:27:54.093 [2024-12-05 21:21:02.106212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.093 [2024-12-05 21:21:02.106231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:54.093 [2024-12-05 21:21:02.112277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7538
00:27:54.094 [2024-12-05 21:21:02.112924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.112942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.121400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efb8b8
00:27:54.094 [2024-12-05 21:21:02.122236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.122254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.130651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee9168
00:27:54.094 [2024-12-05 21:21:02.131521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.131540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.139889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee4140
00:27:54.094 [2024-12-05 21:21:02.140941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.140960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.149144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeff18
00:27:54.094 [2024-12-05 21:21:02.150314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.150332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.156426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6cc8
00:27:54.094 [2024-12-05 21:21:02.157117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.157138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.164612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee0630
00:27:54.094 [2024-12-05 21:21:02.165299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.165317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:54.094 28325.00 IOPS, 110.64 MiB/s [2024-12-05T20:21:02.202Z] [2024-12-05 21:21:02.173835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee1f80
00:27:54.094 [2024-12-05 21:21:02.174658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.174677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.184627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efeb58
00:27:54.094 [2024-12-05 21:21:02.185795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.185814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:54.094 [2024-12-05 21:21:02.192146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee0ea0
00:27:54.094 [2024-12-05 21:21:02.192686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.094 [2024-12-05 21:21:02.192705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.201673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eea248
00:27:54.354 [2024-12-05 21:21:02.202335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.202355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.212100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2510
00:27:54.354 [2024-12-05 21:21:02.213588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.213606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.220613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef35f0
00:27:54.354 [2024-12-05 21:21:02.221734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.221752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.229397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eec840
00:27:54.354 [2024-12-05 21:21:02.230479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.230498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.238336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee0630
00:27:54.354 [2024-12-05 21:21:02.239415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.239433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.247198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edfdc0
00:27:54.354 [2024-12-05 21:21:02.248245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.248263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.256048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef3e60
00:27:54.354 [2024-12-05 21:21:02.257176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.257195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.265023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef92c0
00:27:54.354 [2024-12-05 21:21:02.266079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.266097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.273860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef20d8
00:27:54.354 [2024-12-05 21:21:02.274918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.274936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.282037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0ff8
00:27:54.354 [2024-12-05 21:21:02.283318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.283336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.290226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eebfd0
00:27:54.354 [2024-12-05 21:21:02.290929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.290947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.299213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef9b30
00:27:54.354 [2024-12-05 21:21:02.299944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.299963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.308326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2d80
00:27:54.354 [2024-12-05 21:21:02.309115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.309134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.317663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7da8
00:27:54.354 [2024-12-05 21:21:02.318181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.318201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.327048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee1b48
00:27:54.354 [2024-12-05 21:21:02.327708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.327728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.336171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ede470
00:27:54.354 [2024-12-05 21:21:02.337112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.337131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.345256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2948
00:27:54.354 [2024-12-05 21:21:02.346008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.346026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.355435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eebb98
00:27:54.354 [2024-12-05 21:21:02.356998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.357016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.361774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee4578
00:27:54.354 [2024-12-05 21:21:02.362451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.362470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.371062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6cc8
00:27:54.354 [2024-12-05 21:21:02.371865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.371884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.380322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef9b30
00:27:54.354 [2024-12-05 21:21:02.381254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.381272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.389698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4b08
00:27:54.354 [2024-12-05 21:21:02.390731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.390749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.398667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6020
00:27:54.354 [2024-12-05 21:21:02.399739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.399757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.407603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edf118
00:27:54.354 [2024-12-05 21:21:02.408666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.408685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.354 [2024-12-05 21:21:02.416518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6458
00:27:54.354 [2024-12-05 21:21:02.417581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.354 [2024-12-05 21:21:02.417599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.355 [2024-12-05 21:21:02.425405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6738
00:27:54.355 [2024-12-05 21:21:02.426461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.355 [2024-12-05 21:21:02.426479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.355 [2024-12-05 21:21:02.434276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efbcf0
00:27:54.355 [2024-12-05 21:21:02.435347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.355 [2024-12-05 21:21:02.435365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.355 [2024-12-05 21:21:02.443131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eea248
00:27:54.355 [2024-12-05 21:21:02.444240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.355 [2024-12-05 21:21:02.444258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.355 [2024-12-05 21:21:02.452271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee88f8
00:27:54.355 [2024-12-05 21:21:02.453382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.355 [2024-12-05 21:21:02.453401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.461486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0788
00:27:54.615 [2024-12-05 21:21:02.462579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.462598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.469938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef46d0
00:27:54.615 [2024-12-05 21:21:02.471014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.471036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.478284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef57b0
00:27:54.615 [2024-12-05 21:21:02.478977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.478996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.487265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2d80
00:27:54.615 [2024-12-05 21:21:02.487778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.487797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.497427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6458
00:27:54.615 [2024-12-05 21:21:02.498692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.498711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.506746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eed0b0
00:27:54.615 [2024-12-05 21:21:02.508161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.508179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.513848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eed4e8
00:27:54.615 [2024-12-05 21:21:02.514760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.615 [2024-12-05 21:21:02.514779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:54.615 [2024-12-05 21:21:02.523379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeee38
00:27:54.616 [2024-12-05 21:21:02.524224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.616 [2024-12-05 21:21:02.524242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:54.616 [2024-12-05 21:21:02.531739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eef270
00:27:54.616 [2024-12-05 21:21:02.532611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.616 [2024-12-05 21:21:02.532630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:54.616 [2024-12-05 21:21:02.540636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef5be8
00:27:54.616 [2024-12-05 21:21:02.541490]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.541508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.551619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0350 00:27:54.616 [2024-12-05 21:21:02.553002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.553020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.560267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eddc00 00:27:54.616 [2024-12-05 21:21:02.561365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.561390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.568436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6b70 00:27:54.616 [2024-12-05 21:21:02.569815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.569834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.576704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edf988 
00:27:54.616 [2024-12-05 21:21:02.577398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.577417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.586004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee7818 00:27:54.616 [2024-12-05 21:21:02.586943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.586961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.595266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef5be8 00:27:54.616 [2024-12-05 21:21:02.596301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.596320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.606073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efda78 00:27:54.616 [2024-12-05 21:21:02.607670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.607689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.612450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2ad90) with pdu=0x200016eef6a8 00:27:54.616 [2024-12-05 21:21:02.613188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.613207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.620939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef1868 00:27:54.616 [2024-12-05 21:21:02.621652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.621671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.630606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4298 00:27:54.616 [2024-12-05 21:21:02.631162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.631182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.640384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eecc78 00:27:54.616 [2024-12-05 21:21:02.641596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.641616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.649254] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee8088 00:27:54.616 [2024-12-05 21:21:02.650162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.650181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.658523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee3d08 00:27:54.616 [2024-12-05 21:21:02.659674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.659694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.665641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee7818 00:27:54.616 [2024-12-05 21:21:02.666317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.666335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.675266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee7c50 00:27:54.616 [2024-12-05 21:21:02.675764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.675784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:27:54.616 [2024-12-05 21:21:02.684551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee9e10 00:27:54.616 [2024-12-05 21:21:02.685164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.685182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.694700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef1ca0 00:27:54.616 [2024-12-05 21:21:02.696074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.696094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.703195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef81e0 00:27:54.616 [2024-12-05 21:21:02.704293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.704316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.712145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4b08 00:27:54.616 [2024-12-05 21:21:02.713251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.616 [2024-12-05 21:21:02.713270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:54.616 [2024-12-05 21:21:02.720608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee8088 00:27:54.877 [2024-12-05 21:21:02.721929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.721948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.731377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2948 00:27:54.877 [2024-12-05 21:21:02.732804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.732822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.737627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eea248 00:27:54.877 [2024-12-05 21:21:02.738285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.738303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.746635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee88f8 00:27:54.877 [2024-12-05 21:21:02.747325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.747342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.755530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee9e10 00:27:54.877 [2024-12-05 21:21:02.756222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.756241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.765523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef3e60 00:27:54.877 [2024-12-05 21:21:02.766586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.766605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.772665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6300 00:27:54.877 [2024-12-05 21:21:02.773309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.773327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.781942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee01f8 00:27:54.877 [2024-12-05 21:21:02.782649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.782667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.791307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef5378 00:27:54.877 [2024-12-05 21:21:02.792224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.877 [2024-12-05 21:21:02.792243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:54.877 [2024-12-05 21:21:02.801469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef6890 00:27:54.877 [2024-12-05 21:21:02.802536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.802555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.811737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efac10 00:27:54.878 [2024-12-05 21:21:02.813294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.813312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.818317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2948 00:27:54.878 [2024-12-05 21:21:02.819114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 
[2024-12-05 21:21:02.819133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.827964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eed0b0 00:27:54.878 [2024-12-05 21:21:02.828671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.828690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.836793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeff18 00:27:54.878 [2024-12-05 21:21:02.837489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.837507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.845741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efeb58 00:27:54.878 [2024-12-05 21:21:02.846312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.846331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.854874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee27f0 00:27:54.878 [2024-12-05 21:21:02.855681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14030 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.855700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.862976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee23b8 00:27:54.878 [2024-12-05 21:21:02.863828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.863846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.872860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee38d0 00:27:54.878 [2024-12-05 21:21:02.873897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.873916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.882136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2510 00:27:54.878 [2024-12-05 21:21:02.883283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.883302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.889807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee5ec8 00:27:54.878 [2024-12-05 21:21:02.890276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:79 nsid:1 lba:10817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.890294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.900681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee23b8 00:27:54.878 [2024-12-05 21:21:02.901840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.901859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.908108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edf550 00:27:54.878 [2024-12-05 21:21:02.908911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.908929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.916438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4f40 00:27:54.878 [2024-12-05 21:21:02.917168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.917187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.926397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efd208 00:27:54.878 [2024-12-05 21:21:02.927284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.927302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.935543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efd640 00:27:54.878 [2024-12-05 21:21:02.936544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.936565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.944894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4f40 00:27:54.878 [2024-12-05 21:21:02.946070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.946089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.953530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee4140 00:27:54.878 [2024-12-05 21:21:02.954691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.954709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.962012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef8618 00:27:54.878 
[2024-12-05 21:21:02.962798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.962817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.970851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eddc00 00:27:54.878 [2024-12-05 21:21:02.971660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.971679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:54.878 [2024-12-05 21:21:02.979892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef1868 00:27:54.878 [2024-12-05 21:21:02.980684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.878 [2024-12-05 21:21:02.980702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:02.989091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee6fa8 00:27:55.139 [2024-12-05 21:21:02.989895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:02.989914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:02.998003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2ad90) with pdu=0x200016efd208 00:27:55.139 [2024-12-05 21:21:02.998770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:02.998788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.006852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee3498 00:27:55.139 [2024-12-05 21:21:03.007646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.007665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.015794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efda78 00:27:55.139 [2024-12-05 21:21:03.016596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.016614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.024723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeb328 00:27:55.139 [2024-12-05 21:21:03.025498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.025516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.033692] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef0788 00:27:55.139 [2024-12-05 21:21:03.034494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.034512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.042535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eeee38 00:27:55.139 [2024-12-05 21:21:03.043298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.043316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.051384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ede470 00:27:55.139 [2024-12-05 21:21:03.052184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.052203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.061556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef4b08 00:27:55.139 [2024-12-05 21:21:03.062861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.062879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:27:55.139 [2024-12-05 21:21:03.068726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef8a50 00:27:55.139 [2024-12-05 21:21:03.069485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.069504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.078402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016eedd58 00:27:55.139 [2024-12-05 21:21:03.078966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.078984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.088071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efb8b8 00:27:55.139 [2024-12-05 21:21:03.089497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.089515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.096599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef8e88 00:27:55.139 [2024-12-05 21:21:03.097488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.097507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.105644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef9b30 00:27:55.139 [2024-12-05 21:21:03.106331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.106350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.115563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efb8b8 00:27:55.139 [2024-12-05 21:21:03.116871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.116890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.123861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee5ec8 00:27:55.139 [2024-12-05 21:21:03.124877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.124896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.132738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef7da8 00:27:55.139 [2024-12-05 21:21:03.133730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.133749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.141705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016edfdc0 00:27:55.139 [2024-12-05 21:21:03.142598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.142616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.150565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016efa3a0 00:27:55.139 [2024-12-05 21:21:03.151561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.151580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.159554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ee84c0 00:27:55.139 [2024-12-05 21:21:03.160549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.139 [2024-12-05 21:21:03.160567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:55.139 [2024-12-05 21:21:03.168505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ad90) with pdu=0x200016ef2510 00:27:55.140 [2024-12-05 21:21:03.169459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:55.140 [2024-12-05 21:21:03.169480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:55.140 28443.00 IOPS, 111.11 MiB/s 00:27:55.140 Latency(us) 00:27:55.140 [2024-12-05T20:21:03.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.140 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:55.140 nvme0n1 : 2.00 28456.12 111.16 0.00 0.00 4492.95 2106.51 12170.97 00:27:55.140 [2024-12-05T20:21:03.248Z] =================================================================================================================== 00:27:55.140 [2024-12-05T20:21:03.248Z] Total : 28456.12 111.16 0.00 0.00 4492.95 2106.51 12170.97 00:27:55.140 { 00:27:55.140 "results": [ 00:27:55.140 { 00:27:55.140 "job": "nvme0n1", 00:27:55.140 "core_mask": "0x2", 00:27:55.140 "workload": "randwrite", 00:27:55.140 "status": "finished", 00:27:55.140 "queue_depth": 128, 00:27:55.140 "io_size": 4096, 00:27:55.140 "runtime": 2.003576, 00:27:55.140 "iops": 28456.120456623557, 00:27:55.140 "mibps": 111.15672053368577, 00:27:55.140 "io_failed": 0, 00:27:55.140 "io_timeout": 0, 00:27:55.140 "avg_latency_us": 4492.945817885999, 00:27:55.140 "min_latency_us": 2106.5142857142855, 00:27:55.140 "max_latency_us": 12170.971428571429 00:27:55.140 } 00:27:55.140 ], 00:27:55.140 "core_count": 1 00:27:55.140 } 00:27:55.140 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:55.140 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:55.140 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:55.140 | .driver_specific 00:27:55.140 | .nvme_error 00:27:55.140 | .status_code 00:27:55.140 | .command_transient_transport_error' 00:27:55.140 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 223 > 0 )) 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1458605 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1458605 ']' 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1458605 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1458605 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1458605' 00:27:55.399 killing process with pid 1458605 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1458605 00:27:55.399 Received shutdown signal, test time was about 2.000000 seconds 00:27:55.399 00:27:55.399 Latency(us) 00:27:55.399 [2024-12-05T20:21:03.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.399 [2024-12-05T20:21:03.507Z] =================================================================================================================== 00:27:55.399 [2024-12-05T20:21:03.507Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.399 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1458605 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1459178 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1459178 /var/tmp/bperf.sock 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1459178 ']' 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.658 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.658 [2024-12-05 21:21:03.664577] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:27:55.658 [2024-12-05 21:21:03.664624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459178 ] 00:27:55.658 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.658 Zero copy mechanism will not be used. 00:27:55.658 [2024-12-05 21:21:03.739241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.918 [2024-12-05 21:21:03.782001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.918 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.918 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:55.918 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.918 21:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:56.178 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:56.178 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.178 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.178 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.178 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.178 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.439 nvme0n1 00:27:56.439 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:56.439 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.439 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.439 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.439 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:56.439 21:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.439 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.439 Zero copy mechanism will not be used. 00:27:56.439 Running I/O for 2 seconds... 
00:27:56.439 [2024-12-05 21:21:04.429063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.429156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.429185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.433788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.433866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.433889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.438170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.438229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.438249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.442473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.442547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.442567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.446774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.446845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.446864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.451020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.451074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.451092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.455468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.455521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.455539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.459749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.459820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.459838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.463962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.464042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.464061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.468212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.468279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.468298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.472354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.472441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.472460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.476529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.476593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.476611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.480879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.480958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.480976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.485203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.485259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.485278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.489296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.489353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.489378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.493572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.493625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:56.439 [2024-12-05 21:21:04.493643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.497875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.497943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.497961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.501982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.502033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.502051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.506411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.506477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.506495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.510589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.510650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.510669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.514798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.514870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.514889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.518889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.518954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.518973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.439 [2024-12-05 21:21:04.523095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.439 [2024-12-05 21:21:04.523198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.439 [2024-12-05 21:21:04.523216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.440 [2024-12-05 21:21:04.527437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.440 [2024-12-05 21:21:04.527509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.440 [2024-12-05 21:21:04.527528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.440 [2024-12-05 21:21:04.531497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.440 [2024-12-05 21:21:04.531568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.440 [2024-12-05 21:21:04.531589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.440 [2024-12-05 21:21:04.535652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.440 [2024-12-05 21:21:04.535723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.440 [2024-12-05 21:21:04.535741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.440 [2024-12-05 21:21:04.539998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.440 [2024-12-05 21:21:04.540053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.440 [2024-12-05 21:21:04.540071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.440 [2024-12-05 21:21:04.544327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.440 
[2024-12-05 21:21:04.544409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.440 [2024-12-05 21:21:04.544428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.548590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.548655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.548674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.552910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.552970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.552989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.557162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.557222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.557240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.561300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.561380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.561398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.565549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.565618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.565637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.569659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.569729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.569748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.573827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.573893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.573911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.578046] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.578111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.578128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.582244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.582335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.582354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.587028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.587083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.587101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.592256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.592343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.592362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:56.701 [2024-12-05 21:21:04.597699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.597755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.597772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.602669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.602753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.602771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.607357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.607425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.607443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.611991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.612084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.612103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.616893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.616980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.616998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.621311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.621388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.621406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.625611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.625683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.625717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.629945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.630019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.630038] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.634247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.634313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.634331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.701 [2024-12-05 21:21:04.638510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.701 [2024-12-05 21:21:04.638626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.701 [2024-12-05 21:21:04.638644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.642799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.642867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.642885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.647246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.647324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.647346] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.651648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.651720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.651738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.655959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.656030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.656048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.660283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.660342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.660360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.664473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.664541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:56.702 [2024-12-05 21:21:04.664559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.668755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.668808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.668826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.673185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.673242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.673260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.677576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.677636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.677654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.682179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.682249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.682267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.686468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.686548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.686566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.690947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.691002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.691021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.695249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.695317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.695335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.699707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.699776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.699795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.704095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.704158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.704177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.708531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.708597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.708616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.713095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.713187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.713205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.718262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 
[2024-12-05 21:21:04.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.718363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.722584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.722642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.722659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.726802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.726889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.726907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.731130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.731201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.731219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.735344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.735426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.735444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.739602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.739683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.743821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.743895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.743913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.748160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.748226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.748244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.752379] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.752461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.752479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.756664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.702 [2024-12-05 21:21:04.756729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.702 [2024-12-05 21:21:04.756747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.702 [2024-12-05 21:21:04.761383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.761435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.761453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.766004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.766056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.766074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:27:56.703 [2024-12-05 21:21:04.770334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.770400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.770419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.774744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.774893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.774911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.779537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.779629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.779647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.784365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.784444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.784463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.789835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.789889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.789907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.794686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.794749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.794767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.799110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.799170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.799188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.703 [2024-12-05 21:21:04.803725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.703 [2024-12-05 21:21:04.803783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.703 [2024-12-05 21:21:04.803805] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.808503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.808594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.812845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.812901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.812920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.817072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.817137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.817155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.821441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.821501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.821519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.826234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.826287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.826305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.830569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.830645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.830663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.834911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.834970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.964 [2024-12-05 21:21:04.834987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.964 [2024-12-05 21:21:04.839110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:56.964 [2024-12-05 21:21:04.839196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.964 [2024-12-05 21:21:04.839215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.964 [2024-12-05 21:21:04.843395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.964 [2024-12-05 21:21:04.843474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.964 [2024-12-05 21:21:04.843493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.964 [2024-12-05 21:21:04.847652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.964 [2024-12-05 21:21:04.847727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.964 [2024-12-05 21:21:04.847745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.964 [2024-12-05 21:21:04.851889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.964 [2024-12-05 21:21:04.851961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.964 [2024-12-05 21:21:04.851979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.964 [2024-12-05 21:21:04.856130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.964 [2024-12-05 21:21:04.856210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.964 [2024-12-05 21:21:04.856228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.860347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.860436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.860454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.864626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.864700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.864719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.868871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.868936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.868954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.873429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.873488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.873506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.878106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.878174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.878193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.883182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.883251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.883269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.888304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.888464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.888482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.893676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.893751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.893769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.899109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.899183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.899201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.904328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.904461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.904479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.908970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.909052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.909071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.913379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.913436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.913454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.917633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.917697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.917716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.922009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.922084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.922106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.926451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.926613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.926633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.931529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.931594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.931611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.935776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.935839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.935857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.940209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.940276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.940295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.944464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.944621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.944639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.949440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.949503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.949521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.954472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.954562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.954580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.960604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.960768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.960786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.966897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.966986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.967004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.971913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.972078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.972096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.977050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.977162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.977180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.981844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.981933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.981951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.986674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.965 [2024-12-05 21:21:04.986743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.965 [2024-12-05 21:21:04.986761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.965 [2024-12-05 21:21:04.991481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:04.991551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:04.991569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:04.996914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:04.997005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:04.997023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.002476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.002553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.002571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.008111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.008184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.008202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.012990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.013094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.013112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.018011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.018069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.018087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.023135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.023186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.023204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.027986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.028055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.028073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.032551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.032637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.032655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.037213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.037292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.037310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.041859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.041939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.041957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.046619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.046703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.046721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.051399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.051464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.051486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.056330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.056419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.056437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.060762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.060861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.966 [2024-12-05 21:21:05.060879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:56.966 [2024-12-05 21:21:05.065282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:56.966 [2024-12-05 21:21:05.065393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.065411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.070179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.070245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.070263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.075083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.075135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.075154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.080334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.080387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.080406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.085270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.085355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.085380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.090419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.090474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.090492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.095704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.095773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.095791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.100572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.100641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.100659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.105900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.106051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.106069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.110899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.110987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.111005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.115762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.115846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.115864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.120607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.120699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.120717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.125738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.125792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.125810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.131045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.131116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.131134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.136319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.136454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.136473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.141105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.141182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.141201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.146187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.146273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.146291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.151262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.151329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.226 [2024-12-05 21:21:05.151347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.226 [2024-12-05 21:21:05.156165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.226 [2024-12-05 21:21:05.156253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.156271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.161024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.161087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.161105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.165766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.165868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.165886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.170864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.170939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.170957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.176244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.176317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.176335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.181607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.181678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.181699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.186268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.186336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.186354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.190990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.191043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.191061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.195438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.195489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.195507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.199763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.199832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.199851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.204433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.204510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.204529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.209212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.209284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.209302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.213734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.213800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.213818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:57.227 [2024-12-05 21:21:05.218143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8
00:27:57.227 [2024-12-05 21:21:05.218265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.227 [2024-12-05 21:21:05.218283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.222596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.222662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.222681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.227119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.227196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.227215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.231462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.231576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.231594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.235736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.235790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.235808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.240126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.240185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.240203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.244405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.244488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.244506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.248756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.248818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.248836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.253185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.253242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 
[2024-12-05 21:21:05.253260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.257620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.257688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.257706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.262103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.262172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.262190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.266352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.266437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.266455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.270776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.270842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.270860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.275231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.275308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.275326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.280220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.280285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.280304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.285399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.285467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.285485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.290731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.290809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.290827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.295994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.296063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.296081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.301175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.301277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.301299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.306399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.306490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.306508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.311490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.311557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.311576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.316521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.316592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.316610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.322273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.322340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.322358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.227 [2024-12-05 21:21:05.329131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.227 [2024-12-05 21:21:05.329280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.227 [2024-12-05 21:21:05.329300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.335924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 
[2024-12-05 21:21:05.335998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.336017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.342092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.342174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.342193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.349193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.349289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.349307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.356197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.356381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.356399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.362832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.362883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.362902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.368458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.368537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.368556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.373579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.373647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.373666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.378109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.378216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.378235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.383310] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.383489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.383507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.389652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.389823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.389842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.394974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.395071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.395090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.400996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.401147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.401166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:57.487 [2024-12-05 21:21:05.407211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.407411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.407430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.413584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.413746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.413764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.420227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.420394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.420414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.487 6502.00 IOPS, 812.75 MiB/s [2024-12-05T20:21:05.595Z] [2024-12-05 21:21:05.427296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.427497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.427518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.433423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.433590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.440066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.440221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.440240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.446567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.446724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.446745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.452945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.453094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:57.487 [2024-12-05 21:21:05.453113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.459331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.459506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.459528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.487 [2024-12-05 21:21:05.465854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.487 [2024-12-05 21:21:05.466003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.487 [2024-12-05 21:21:05.466022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.472300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.472393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.472412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.477656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.477740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.477758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.482216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.482283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.482301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.486726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.486829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.486847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.491309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.491387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.491406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.495783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.495842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.495860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.500270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.500328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.500347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.504736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.504805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.504824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.509167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.509233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.509252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.513594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.513656] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.513674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.518261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.518323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.518343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.523028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.523108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.523128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.528155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.528210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.528229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.533540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with 
pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.533595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.533613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.538407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.538468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.538486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.543145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.543225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.543243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.548020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.548074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.548093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.552895] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.552959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.552978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.557462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.557537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.557555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.562166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.562292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.562311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.567059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.567136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.567155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.571550] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.571615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.571634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.575933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.576004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.576022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.580376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.580449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.580467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.488 [2024-12-05 21:21:05.584895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.584950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.584972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:27:57.488 [2024-12-05 21:21:05.589329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.488 [2024-12-05 21:21:05.589397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.488 [2024-12-05 21:21:05.589416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.593777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.593844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.593863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.598256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.598310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.598328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.602711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.602777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.602796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.607163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.607239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.607257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.611658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.611752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.611771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.616384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.616467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.616486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.621060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.621126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.621144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.626388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.626468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.626492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.631384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.631542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.631560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.636138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.636241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.636259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.640745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.640805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.640823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.645219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.645277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.645296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.649771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.649834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.649852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.654208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.654262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.654280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.658705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.658768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:57.748 [2024-12-05 21:21:05.658786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.663178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.663232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.663251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.667649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.667703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.667721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.672146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.672206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.672225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.676681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.676736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.676754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.681191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.681244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.681262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.685666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.685728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.685746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.690144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.690196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.690214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.694632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.694699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.694718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.699106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.699159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.699177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.703602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.703669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.703687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.708073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.708126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.708144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.712563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 
[2024-12-05 21:21:05.712625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.748 [2024-12-05 21:21:05.712643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.748 [2024-12-05 21:21:05.717038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.748 [2024-12-05 21:21:05.717093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.717112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.721783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.721893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.721912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.726884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.726934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.726953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.732115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.732167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.732185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.737769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.737825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.737843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.743078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.743136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.743154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.748532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.748585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.748607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.753270] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.753325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.753343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.757907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.757970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.757988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.762503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.762580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.762598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.767236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.767309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.767326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:57.749 [2024-12-05 21:21:05.771874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.771941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.771960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.776569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.776682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.776700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.781463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.781531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.781549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.786168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.786222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.786241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.790813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.790892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.790911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.795533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.795649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.795667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.800230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.800283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.800301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.805313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.805365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.805390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.810722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.810779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.810796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.815811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.815883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.815902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.821303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.821404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.821422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.826585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.826674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.826692] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.831986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.832078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.832096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.836988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.837055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.837074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.842195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.842267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.842286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.847530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.847589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:57.749 [2024-12-05 21:21:05.847607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.749 [2024-12-05 21:21:05.852688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:57.749 [2024-12-05 21:21:05.852793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.749 [2024-12-05 21:21:05.852811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.857404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.857465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.857484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.862661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.862920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.862939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.867919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.868058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.868077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.873135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.873205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.873224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.878302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.878463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.878484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.883894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.883946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.883964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.889215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.889356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.889380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.894232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.894327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.894345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.899436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.899537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.899556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.904655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.904800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.904819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.909748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.909811] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.909829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.914870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.914943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.914961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.920529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.920679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.920698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.925814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.925873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.925892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.931049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 
00:27:58.008 [2024-12-05 21:21:05.931120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.931138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.935932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.936009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.936028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.940695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.940775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.940794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.945325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.945386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.949939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.949995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.950013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.954642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.954700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.954719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.959338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.959425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.959443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.964025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.964088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.964107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.968689] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.968812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.973530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.973617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.973635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.978312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.978440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.978458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.982993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.983082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.983100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:58.008 [2024-12-05 21:21:05.987631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.987697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.992258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.992309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.992327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:05.996999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:05.997114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:05.997133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.003191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.003319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.003337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.008728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.008801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.008823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.013804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.013879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.013897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.018498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.018600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.018618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.023131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.023212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.023230] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.027840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.027942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.027959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.032259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.032317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.032335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.036872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.036937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.036954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.041513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.041568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.041586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.046529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.046579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.046597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.051770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.051824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.051842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.057077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.057219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.057237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.062431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.062509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:58.008 [2024-12-05 21:21:06.062527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.067321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.067388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.067406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.072087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.072157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.072175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.076700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.076777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.076795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.081246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.081360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.081384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.085999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.086070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.086089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.090604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.090670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.090689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.095625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.095677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.008 [2024-12-05 21:21:06.095695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.008 [2024-12-05 21:21:06.100592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.008 [2024-12-05 21:21:06.100680] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-12-05 21:21:06.100698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.009 [2024-12-05 21:21:06.105072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.009 [2024-12-05 21:21:06.105169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-12-05 21:21:06.105188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.009 [2024-12-05 21:21:06.109540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.009 [2024-12-05 21:21:06.109597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-12-05 21:21:06.109616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.113974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.114033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.114051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.118425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.118483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.118502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.122891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.122958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.122977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.127334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.127395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.127414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.131785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.131851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.131873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.136203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with 
pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.136260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.136279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.140884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.140983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.141001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.146570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.146742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.146759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.153095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.153275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.153293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.159584] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.159700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.159718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.167259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.167392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.167411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.175416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.175560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.175579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.182972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.183110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.183129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 
21:21:06.190229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.190358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.190382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.198151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.268 [2024-12-05 21:21:06.198309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.268 [2024-12-05 21:21:06.198328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.268 [2024-12-05 21:21:06.205408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.205538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.205556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.213237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.213374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.213392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.221248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.221406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.221424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.229030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.229156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.229175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.236230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.236387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.236407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.243275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.243349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.243373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.249746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.249846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.249864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.254764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.254826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.254844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.259604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.259671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.259690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.264087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.264143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.264162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.268616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.268696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.268715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.273133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.273194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.273213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.277546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.277635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.277653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.282106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.282181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 
[2024-12-05 21:21:06.282200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.286581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.286699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.286717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.291038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.291107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.291128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.295465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.295604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.295623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.300608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.300778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.300796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.307277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.307450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.307468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.312632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.312733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.312751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.317777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.317930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.317948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.322914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.322986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.323005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.327843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.327911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.327930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.333762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.333933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.333952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.339791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.339885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.339907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.345209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.345375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.345393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.350276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.350393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.350411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.354951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.355020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.355038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.360438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.269 [2024-12-05 21:21:06.360606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.269 [2024-12-05 21:21:06.360623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.269 [2024-12-05 21:21:06.366125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.270 
[2024-12-05 21:21:06.366217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.270 [2024-12-05 21:21:06.366235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.270 [2024-12-05 21:21:06.371513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.270 [2024-12-05 21:21:06.371578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.270 [2024-12-05 21:21:06.371596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.376531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.376606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.376625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.381703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.381845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.381863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.387121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.387187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.387205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.392219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.392289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.392307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.397782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.397873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.397892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.403035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.403104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.403123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.408479] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.408557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.408576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.415691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.415845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.415864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.529 [2024-12-05 21:21:06.422522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.422602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:58.529 6275.00 IOPS, 784.38 MiB/s [2024-12-05T20:21:06.637Z] [2024-12-05 21:21:06.428914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2b0d0) with pdu=0x200016eff3c8 00:27:58.529 [2024-12-05 21:21:06.429014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.529 [2024-12-05 21:21:06.429033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:58.529 00:27:58.529 Latency(us) 00:27:58.529 [2024-12-05T20:21:06.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.529 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:58.529 nvme0n1 : 2.00 6270.16 783.77 0.00 0.00 2546.83 1802.24 9424.70 00:27:58.529 [2024-12-05T20:21:06.637Z] =================================================================================================================== 00:27:58.529 [2024-12-05T20:21:06.637Z] Total : 6270.16 783.77 0.00 0.00 2546.83 1802.24 9424.70 00:27:58.529 { 00:27:58.529 "results": [ 00:27:58.529 { 00:27:58.529 "job": "nvme0n1", 00:27:58.529 "core_mask": "0x2", 00:27:58.529 "workload": "randwrite", 00:27:58.529 "status": "finished", 00:27:58.530 "queue_depth": 16, 00:27:58.530 "io_size": 131072, 00:27:58.530 "runtime": 2.004097, 00:27:58.530 "iops": 6270.155586281502, 00:27:58.530 "mibps": 783.7694482851878, 00:27:58.530 "io_failed": 0, 00:27:58.530 "io_timeout": 0, 00:27:58.530 "avg_latency_us": 2546.833276793767, 00:27:58.530 "min_latency_us": 1802.24, 00:27:58.530 "max_latency_us": 9424.700952380952 00:27:58.530 } 00:27:58.530 ], 00:27:58.530 "core_count": 1 00:27:58.530 } 00:27:58.530 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:58.530 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:58.530 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:58.530 | .driver_specific 00:27:58.530 | .nvme_error 00:27:58.530 | .status_code 00:27:58.530 | .command_transient_transport_error' 00:27:58.530 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 406 > 0 )) 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1459178 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1459178 ']' 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1459178 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1459178 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1459178' 00:27:58.790 killing process with pid 1459178 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1459178 00:27:58.790 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.790 00:27:58.790 Latency(us) 00:27:58.790 [2024-12-05T20:21:06.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.790 [2024-12-05T20:21:06.898Z] =================================================================================================================== 00:27:58.790 [2024-12-05T20:21:06.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 1459178 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1457393 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1457393 ']' 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1457393 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.790 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457393 00:27:59.051 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.051 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.051 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457393' 00:27:59.051 killing process with pid 1457393 00:27:59.051 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1457393 00:27:59.051 21:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1457393 00:27:59.051 00:27:59.051 real 0m13.934s 00:27:59.051 user 0m26.734s 00:27:59.051 sys 0m4.509s 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:59.051 ************************************ 00:27:59.051 END TEST nvmf_digest_error 00:27:59.051 ************************************ 00:27:59.051 
21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:59.051 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:59.051 rmmod nvme_tcp 00:27:59.051 rmmod nvme_fabrics 00:27:59.312 rmmod nvme_keyring 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1457393 ']' 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1457393 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1457393 ']' 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1457393 00:27:59.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1457393) - No such process 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1457393 is not found' 00:27:59.312 Process with pid 1457393 is not found 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:59.312 21:21:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:59.312 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:59.313 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:59.313 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.313 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.313 21:21:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:01.218 00:28:01.218 real 0m36.822s 00:28:01.218 user 0m55.595s 00:28:01.218 sys 0m13.805s 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:01.218 ************************************ 00:28:01.218 END TEST nvmf_digest 00:28:01.218 ************************************ 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:01.218 21:21:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # 
run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:01.219 21:21:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:01.219 21:21:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.219 21:21:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.479 ************************************ 00:28:01.479 START TEST nvmf_bdevperf 00:28:01.479 ************************************ 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:01.479 * Looking for test storage... 00:28:01.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 
00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:01.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.479 --rc genhtml_branch_coverage=1 00:28:01.479 --rc genhtml_function_coverage=1 00:28:01.479 --rc genhtml_legend=1 00:28:01.479 --rc geninfo_all_blocks=1 00:28:01.479 --rc geninfo_unexecuted_blocks=1 00:28:01.479 00:28:01.479 ' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:01.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.479 --rc genhtml_branch_coverage=1 00:28:01.479 --rc genhtml_function_coverage=1 00:28:01.479 --rc genhtml_legend=1 00:28:01.479 --rc geninfo_all_blocks=1 00:28:01.479 --rc geninfo_unexecuted_blocks=1 00:28:01.479 00:28:01.479 ' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:01.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.479 --rc genhtml_branch_coverage=1 00:28:01.479 --rc genhtml_function_coverage=1 00:28:01.479 --rc genhtml_legend=1 00:28:01.479 --rc geninfo_all_blocks=1 00:28:01.479 --rc geninfo_unexecuted_blocks=1 00:28:01.479 00:28:01.479 ' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:01.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.479 --rc genhtml_branch_coverage=1 00:28:01.479 --rc genhtml_function_coverage=1 00:28:01.479 --rc genhtml_legend=1 00:28:01.479 --rc geninfo_all_blocks=1 00:28:01.479 --rc geninfo_unexecuted_blocks=1 00:28:01.479 00:28:01.479 ' 00:28:01.479 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.480 21:21:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.480 21:21:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:01.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:01.480 21:21:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:28:08.052 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:08.052 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.052 21:21:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:08.052 Found net devices under 0000:86:00.0: cvl_0_0 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:08.052 Found net devices under 0000:86:00.1: cvl_0_1 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.052 21:21:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:08.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:28:08.052 00:28:08.052 --- 10.0.0.2 ping statistics --- 00:28:08.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.052 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:08.052 00:28:08.052 --- 10.0.0.1 ping statistics --- 00:28:08.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.052 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:08.052 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1463569 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1463569 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1463569 ']' 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 [2024-12-05 21:21:15.527078] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:28:08.053 [2024-12-05 21:21:15.527131] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.053 [2024-12-05 21:21:15.606695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:08.053 [2024-12-05 21:21:15.650078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.053 [2024-12-05 21:21:15.650114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:08.053 [2024-12-05 21:21:15.650122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.053 [2024-12-05 21:21:15.650128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.053 [2024-12-05 21:21:15.650135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.053 [2024-12-05 21:21:15.651511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.053 [2024-12-05 21:21:15.651616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.053 [2024-12-05 21:21:15.651617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 [2024-12-05 21:21:15.789160] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.053 21:21:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 Malloc0 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:08.053 [2024-12-05 21:21:15.861605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.053 { 00:28:08.053 "params": { 00:28:08.053 "name": "Nvme$subsystem", 00:28:08.053 "trtype": "$TEST_TRANSPORT", 00:28:08.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.053 "adrfam": "ipv4", 00:28:08.053 "trsvcid": "$NVMF_PORT", 00:28:08.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.053 "hdgst": ${hdgst:-false}, 00:28:08.053 "ddgst": ${ddgst:-false} 00:28:08.053 }, 00:28:08.053 "method": "bdev_nvme_attach_controller" 00:28:08.053 } 00:28:08.053 EOF 00:28:08.053 )") 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:08.053 21:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:08.053 "params": { 00:28:08.053 "name": "Nvme1", 00:28:08.053 "trtype": "tcp", 00:28:08.053 "traddr": "10.0.0.2", 00:28:08.053 "adrfam": "ipv4", 00:28:08.053 "trsvcid": "4420", 00:28:08.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.053 "hdgst": false, 00:28:08.053 "ddgst": false 00:28:08.053 }, 00:28:08.053 "method": "bdev_nvme_attach_controller" 00:28:08.053 }' 00:28:08.053 [2024-12-05 21:21:15.913665] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:28:08.053 [2024-12-05 21:21:15.913721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463769 ] 00:28:08.053 [2024-12-05 21:21:15.991072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.053 [2024-12-05 21:21:16.032065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.311 Running I/O for 1 seconds... 
00:28:09.246 11269.00 IOPS, 44.02 MiB/s
00:28:09.246 Latency(us)
00:28:09.246 [2024-12-05T20:21:17.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.246 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:09.246 Verification LBA range: start 0x0 length 0x4000
00:28:09.246 Nvme1n1 : 1.01 11290.55 44.10 0.00 0.00 11297.27 2371.78 12420.63
00:28:09.246 [2024-12-05T20:21:17.354Z] ===================================================================================================================
00:28:09.246 [2024-12-05T20:21:17.354Z] Total : 11290.55 44.10 0.00 0.00 11297.27 2371.78 12420.63
00:28:09.505 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1464042
00:28:09.505 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:09.505 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:09.505 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:09.505 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:28:09.505 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:28:09.506 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:09.506 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:09.506 {
00:28:09.506 "params": {
00:28:09.506 "name": "Nvme$subsystem",
00:28:09.506 "trtype": "$TEST_TRANSPORT",
00:28:09.506 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:09.506 "adrfam": "ipv4",
00:28:09.506 "trsvcid": "$NVMF_PORT",
00:28:09.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:09.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:09.506 "hdgst": ${hdgst:-false},
00:28:09.506 "ddgst": ${ddgst:-false}
00:28:09.506 },
00:28:09.506 "method": "bdev_nvme_attach_controller"
00:28:09.506 }
00:28:09.506 EOF
00:28:09.506 )")
00:28:09.506 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:28:09.506 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:28:09.506 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:28:09.506 21:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:09.506 "params": {
00:28:09.506 "name": "Nvme1",
00:28:09.506 "trtype": "tcp",
00:28:09.506 "traddr": "10.0.0.2",
00:28:09.506 "adrfam": "ipv4",
00:28:09.506 "trsvcid": "4420",
00:28:09.506 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:09.506 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:09.506 "hdgst": false,
00:28:09.506 "ddgst": false
00:28:09.506 },
00:28:09.506 "method": "bdev_nvme_attach_controller"
00:28:09.506 }'
00:28:09.506 [2024-12-05 21:21:17.533841] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:28:09.506 [2024-12-05 21:21:17.533889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464042 ]
00:28:09.506 [2024-12-05 21:21:17.608871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:09.765 [2024-12-05 21:21:17.647385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:09.765 Running I/O for 15 seconds...
00:28:11.710 11305.00 IOPS, 44.16 MiB/s [2024-12-05T20:21:20.758Z] 11296.00 IOPS, 44.12 MiB/s [2024-12-05T20:21:20.758Z] 21:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1463569 00:28:12.650 21:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:12.651 [2024-12-05 21:21:20.501846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.501885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.501903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.501928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.501939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.501946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.501956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.501968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.501978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.501985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.501994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:12.651 [2024-12-05 21:21:20.502082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.651 [2024-12-05 21:21:20.502208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.651 [2024-12-05 21:21:20.502501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-05 21:21:20.502551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.651 [2024-12-05 21:21:20.502559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.652 [2024-12-05 21:21:20.502726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.652 [2024-12-05 21:21:20.502772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.502990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.502998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.503005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.503013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.503020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.503028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.652 [2024-12-05 21:21:20.503035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.652 [2024-12-05 21:21:20.503044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-05 21:21:20.503050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.653 [2024-12-05 21:21:20.503300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.653 [2024-12-05 21:21:20.503538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.653 [2024-12-05 21:21:20.503547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.654 [2024-12-05 21:21:20.503585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.654 [2024-12-05 21:21:20.503709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:12.654 [2024-12-05 21:21:20.503847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.503986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.503995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.504002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.504010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.504017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.654 [2024-12-05 21:21:20.504025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.654 [2024-12-05 21:21:20.504032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.655 [2024-12-05 21:21:20.504040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.655 [2024-12-05 21:21:20.504047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.655 [2024-12-05 21:21:20.504057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.655 [2024-12-05 21:21:20.504064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.655 [2024-12-05 21:21:20.504072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79c410 is same with the state(6) to be set 00:28:12.655 [2024-12-05 21:21:20.504081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.655 [2024-12-05 21:21:20.504087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.655 [2024-12-05 21:21:20.504093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112096 len:8 PRP1 0x0 PRP2 0x0 00:28:12.655 [2024-12-05 21:21:20.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.655 [2024-12-05 21:21:20.507082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] 
resetting controller
00:28:12.655 [2024-12-05 21:21:20.507137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.507639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.507657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.507666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.507851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.508043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.655 [2024-12-05 21:21:20.508050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.655 [2024-12-05 21:21:20.508058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.655 [2024-12-05 21:21:20.508066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.655 [2024-12-05 21:21:20.520308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.655 [2024-12-05 21:21:20.520751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.520769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.520778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.520951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.521125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.655 [2024-12-05 21:21:20.521133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.655 [2024-12-05 21:21:20.521140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.655 [2024-12-05 21:21:20.521147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.655 [2024-12-05 21:21:20.533383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.655 [2024-12-05 21:21:20.533766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.533782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.533789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.533962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.534136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.655 [2024-12-05 21:21:20.534144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.655 [2024-12-05 21:21:20.534151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.655 [2024-12-05 21:21:20.534158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.655 [2024-12-05 21:21:20.546425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.655 [2024-12-05 21:21:20.546779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.546798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.546806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.546974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.547142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.655 [2024-12-05 21:21:20.547150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.655 [2024-12-05 21:21:20.547156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.655 [2024-12-05 21:21:20.547162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.655 [2024-12-05 21:21:20.559420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.655 [2024-12-05 21:21:20.559837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.559883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.559906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.560419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.560594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.655 [2024-12-05 21:21:20.560610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.655 [2024-12-05 21:21:20.560617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.655 [2024-12-05 21:21:20.560623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.655 [2024-12-05 21:21:20.572250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.655 [2024-12-05 21:21:20.572698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.572743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.572766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.573350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.655 [2024-12-05 21:21:20.573570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.655 [2024-12-05 21:21:20.573579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.655 [2024-12-05 21:21:20.573585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.655 [2024-12-05 21:21:20.573591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.655 [2024-12-05 21:21:20.585092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.655 [2024-12-05 21:21:20.585519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.655 [2024-12-05 21:21:20.585535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.655 [2024-12-05 21:21:20.585542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.655 [2024-12-05 21:21:20.585704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.656 [2024-12-05 21:21:20.585862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.656 [2024-12-05 21:21:20.585870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.656 [2024-12-05 21:21:20.585876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.656 [2024-12-05 21:21:20.585882] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.656 [2024-12-05 21:21:20.597942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.656 [2024-12-05 21:21:20.598371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.656 [2024-12-05 21:21:20.598387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.656 [2024-12-05 21:21:20.598410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.656 [2024-12-05 21:21:20.598578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.656 [2024-12-05 21:21:20.598746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.656 [2024-12-05 21:21:20.598754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.656 [2024-12-05 21:21:20.598760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.656 [2024-12-05 21:21:20.598766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.656 [2024-12-05 21:21:20.610710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.656 [2024-12-05 21:21:20.611130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.656 [2024-12-05 21:21:20.611146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.656 [2024-12-05 21:21:20.611152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.656 [2024-12-05 21:21:20.611312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.656 [2024-12-05 21:21:20.611498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.656 [2024-12-05 21:21:20.611507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.656 [2024-12-05 21:21:20.611514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.656 [2024-12-05 21:21:20.611519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.656 [2024-12-05 21:21:20.623586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.656 [2024-12-05 21:21:20.623982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.656 [2024-12-05 21:21:20.623998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.656 [2024-12-05 21:21:20.624005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.656 [2024-12-05 21:21:20.624164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.656 [2024-12-05 21:21:20.624323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.656 [2024-12-05 21:21:20.624334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.656 [2024-12-05 21:21:20.624340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.656 [2024-12-05 21:21:20.624346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.656 [2024-12-05 21:21:20.636558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.656 [2024-12-05 21:21:20.636899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.656 [2024-12-05 21:21:20.636915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.656 [2024-12-05 21:21:20.636922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.656 [2024-12-05 21:21:20.637081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.656 [2024-12-05 21:21:20.637239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.656 [2024-12-05 21:21:20.637247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.656 [2024-12-05 21:21:20.637253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.656 [2024-12-05 21:21:20.637259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.656 [2024-12-05 21:21:20.649499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.656 [2024-12-05 21:21:20.649945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.656 [2024-12-05 21:21:20.649961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:12.656 [2024-12-05 21:21:20.649968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:12.656 [2024-12-05 21:21:20.650151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:12.656 [2024-12-05 21:21:20.650321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.656 [2024-12-05 21:21:20.650329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.656 [2024-12-05 21:21:20.650335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.656 [2024-12-05 21:21:20.650341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.656 [2024-12-05 21:21:20.662309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.656 [2024-12-05 21:21:20.662744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-12-05 21:21:20.662788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.656 [2024-12-05 21:21:20.662811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.656 [2024-12-05 21:21:20.663415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.656 [2024-12-05 21:21:20.663906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.656 [2024-12-05 21:21:20.663915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.663921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.663927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.675113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.675541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.675557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.675564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.675723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.675882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.675890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.675896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.675901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.687961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.688374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.688390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.688413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.688581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.688749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.688757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.688764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.688770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.700834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.701228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.701243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.701250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.701434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.701603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.701611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.701618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.701624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.713701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.714092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.714144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.714168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.714734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.714905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.714913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.714919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.714925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.726560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.726889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.726904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.726911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.727070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.727229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.727236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.727242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.727248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.739339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.739770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.739786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.739792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.739952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.740112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.740119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.740125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.740131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.657 [2024-12-05 21:21:20.752391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.657 [2024-12-05 21:21:20.752825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-12-05 21:21:20.752841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.657 [2024-12-05 21:21:20.752848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.657 [2024-12-05 21:21:20.753021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.657 [2024-12-05 21:21:20.753196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.657 [2024-12-05 21:21:20.753204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.657 [2024-12-05 21:21:20.753211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.657 [2024-12-05 21:21:20.753217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.918 [2024-12-05 21:21:20.765292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.918 [2024-12-05 21:21:20.765748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.918 [2024-12-05 21:21:20.765764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.918 [2024-12-05 21:21:20.765771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.918 [2024-12-05 21:21:20.765944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.918 [2024-12-05 21:21:20.766117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.918 [2024-12-05 21:21:20.766125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.918 [2024-12-05 21:21:20.766132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.918 [2024-12-05 21:21:20.766138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.918 [2024-12-05 21:21:20.778410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.918 [2024-12-05 21:21:20.778750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.918 [2024-12-05 21:21:20.778766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.918 [2024-12-05 21:21:20.778775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.918 [2024-12-05 21:21:20.778950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.918 [2024-12-05 21:21:20.779123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.918 [2024-12-05 21:21:20.779132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.918 [2024-12-05 21:21:20.779138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.918 [2024-12-05 21:21:20.779144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.918 [2024-12-05 21:21:20.791326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.918 [2024-12-05 21:21:20.791660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.918 [2024-12-05 21:21:20.791703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.918 [2024-12-05 21:21:20.791727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.918 [2024-12-05 21:21:20.792213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.918 [2024-12-05 21:21:20.792391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.918 [2024-12-05 21:21:20.792401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.918 [2024-12-05 21:21:20.792411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.918 [2024-12-05 21:21:20.792418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.918 [2024-12-05 21:21:20.804313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.918 [2024-12-05 21:21:20.804693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.918 [2024-12-05 21:21:20.804709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.918 [2024-12-05 21:21:20.804716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.918 [2024-12-05 21:21:20.804883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.918 [2024-12-05 21:21:20.805052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.918 [2024-12-05 21:21:20.805060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.918 [2024-12-05 21:21:20.805066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.918 [2024-12-05 21:21:20.805072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.918 [2024-12-05 21:21:20.817256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.817626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.817643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.817650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.817829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.817997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.818006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.818012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.818018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 10095.67 IOPS, 39.44 MiB/s [2024-12-05T20:21:21.027Z] [2024-12-05 21:21:20.830273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.830635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.830691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.830714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.831248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.831427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.831435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.831443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.831449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 [2024-12-05 21:21:20.843334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.843788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.843832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.843855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.844308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.844486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.844494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.844501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.844507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 [2024-12-05 21:21:20.856289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.856720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.856771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.856794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.857312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.857488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.857497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.857504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.857510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 [2024-12-05 21:21:20.869132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.869462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.869479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.869486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.869653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.869822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.869830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.869836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.869842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 [2024-12-05 21:21:20.881948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.882358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.882428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.882452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.883035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.883450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.883459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.883465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.883470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 [2024-12-05 21:21:20.894695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.895117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.895132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.895139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.895297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.895463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.919 [2024-12-05 21:21:20.895472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.919 [2024-12-05 21:21:20.895478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.919 [2024-12-05 21:21:20.895484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.919 [2024-12-05 21:21:20.907518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.919 [2024-12-05 21:21:20.907886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.919 [2024-12-05 21:21:20.907902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.919 [2024-12-05 21:21:20.907910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.919 [2024-12-05 21:21:20.908078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.919 [2024-12-05 21:21:20.908247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.908255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.908261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.908267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.920352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.920701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.920717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.920724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.920892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.921063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.921071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.921077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.921083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.933130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.933528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.933543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.933550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.933709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.933868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.933876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.933882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.933888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.945957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.946377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.946392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.946414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.946582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.946750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.946758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.946764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.946771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.958788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.959246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.959289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.959313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.959923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.960527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.960554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.960589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.960595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.971578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.971974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.971990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.971996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.972155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.972314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.972322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.972328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.972334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.984414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.984806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.984821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.984828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.984986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.985145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.985153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.985159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.985164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:20.997236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:20.997599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:20.997616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:20.997623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:20.997790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.920 [2024-12-05 21:21:20.997957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.920 [2024-12-05 21:21:20.997965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.920 [2024-12-05 21:21:20.997972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.920 [2024-12-05 21:21:20.997978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.920 [2024-12-05 21:21:21.010066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.920 [2024-12-05 21:21:21.010505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.920 [2024-12-05 21:21:21.010521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.920 [2024-12-05 21:21:21.010529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:12.920 [2024-12-05 21:21:21.010697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:12.921 [2024-12-05 21:21:21.010865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.921 [2024-12-05 21:21:21.010873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.921 [2024-12-05 21:21:21.010880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.921 [2024-12-05 21:21:21.010886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.921 [2024-12-05 21:21:21.023171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.921 [2024-12-05 21:21:21.023609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.921 [2024-12-05 21:21:21.023625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:12.921 [2024-12-05 21:21:21.023632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.180 [2024-12-05 21:21:21.023805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.180 [2024-12-05 21:21:21.023979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.180 [2024-12-05 21:21:21.023987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.180 [2024-12-05 21:21:21.023993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.180 [2024-12-05 21:21:21.024000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.180 [2024-12-05 21:21:21.036076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.180 [2024-12-05 21:21:21.036514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.180 [2024-12-05 21:21:21.036560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.180 [2024-12-05 21:21:21.036584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.180 [2024-12-05 21:21:21.036992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.180 [2024-12-05 21:21:21.037161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.180 [2024-12-05 21:21:21.037169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.180 [2024-12-05 21:21:21.037175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.180 [2024-12-05 21:21:21.037181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.180 [2024-12-05 21:21:21.048821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.180 [2024-12-05 21:21:21.049241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.180 [2024-12-05 21:21:21.049285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.180 [2024-12-05 21:21:21.049316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.180 [2024-12-05 21:21:21.049804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.180 [2024-12-05 21:21:21.049973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.180 [2024-12-05 21:21:21.049981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.180 [2024-12-05 21:21:21.049987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.180 [2024-12-05 21:21:21.049993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.180 [2024-12-05 21:21:21.061659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.180 [2024-12-05 21:21:21.062048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.180 [2024-12-05 21:21:21.062064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.180 [2024-12-05 21:21:21.062070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.180 [2024-12-05 21:21:21.062228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.180 [2024-12-05 21:21:21.062410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.180 [2024-12-05 21:21:21.062419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.180 [2024-12-05 21:21:21.062426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.180 [2024-12-05 21:21:21.062432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.180 [2024-12-05 21:21:21.074549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.180 [2024-12-05 21:21:21.074957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.180 [2024-12-05 21:21:21.075001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.180 [2024-12-05 21:21:21.075024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.180 [2024-12-05 21:21:21.075627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.180 [2024-12-05 21:21:21.076035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.180 [2024-12-05 21:21:21.076043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.180 [2024-12-05 21:21:21.076049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.180 [2024-12-05 21:21:21.076056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.180 [2024-12-05 21:21:21.087360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.180 [2024-12-05 21:21:21.087785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.180 [2024-12-05 21:21:21.087800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.180 [2024-12-05 21:21:21.087807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.180 [2024-12-05 21:21:21.087966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.180 [2024-12-05 21:21:21.088128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.088136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.088142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.088148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.100215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.100636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.100652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.100659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.100827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.100994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.101002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.101009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.101015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.112965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.113381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.113397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.113404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.113563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.113722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.113730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.113736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.113741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.125794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.126209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.126225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.126232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.126413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.126586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.126594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.126604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.126610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.138561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.138912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.138955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.138978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.139577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.140021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.140029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.140035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.140041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.151472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.151898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.151942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.151965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.152569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.153067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.153075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.153081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.153087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.166640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.167154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.167178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.167188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.167451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.167708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.167720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.167730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.167738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.179641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.180052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.180069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.180077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.180249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.180433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.180443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.180450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.180456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.192520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.192904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.192947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.192971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.193568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.193986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.194004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.194019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.194032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.207341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.207805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.207827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.207837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.208092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.208347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.181 [2024-12-05 21:21:21.208359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.181 [2024-12-05 21:21:21.208376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.181 [2024-12-05 21:21:21.208386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.181 [2024-12-05 21:21:21.220400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.181 [2024-12-05 21:21:21.220808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.181 [2024-12-05 21:21:21.220824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.181 [2024-12-05 21:21:21.220834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.181 [2024-12-05 21:21:21.221007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.181 [2024-12-05 21:21:21.221181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.182 [2024-12-05 21:21:21.221189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.182 [2024-12-05 21:21:21.221196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.182 [2024-12-05 21:21:21.221202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.182 [2024-12-05 21:21:21.233321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.182 [2024-12-05 21:21:21.233739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.182 [2024-12-05 21:21:21.233755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.182 [2024-12-05 21:21:21.233762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.182 [2024-12-05 21:21:21.233930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.182 [2024-12-05 21:21:21.234099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.182 [2024-12-05 21:21:21.234107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.182 [2024-12-05 21:21:21.234113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.182 [2024-12-05 21:21:21.234119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.182 [2024-12-05 21:21:21.246162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.182 [2024-12-05 21:21:21.246602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.182 [2024-12-05 21:21:21.246618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.182 [2024-12-05 21:21:21.246624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.182 [2024-12-05 21:21:21.246783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.182 [2024-12-05 21:21:21.246942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.182 [2024-12-05 21:21:21.246950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.182 [2024-12-05 21:21:21.246956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.182 [2024-12-05 21:21:21.246962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.182 [2024-12-05 21:21:21.258955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.182 [2024-12-05 21:21:21.259389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.182 [2024-12-05 21:21:21.259440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.182 [2024-12-05 21:21:21.259464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.182 [2024-12-05 21:21:21.259987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.182 [2024-12-05 21:21:21.260163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.182 [2024-12-05 21:21:21.260170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.182 [2024-12-05 21:21:21.260176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.182 [2024-12-05 21:21:21.260182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.182 [2024-12-05 21:21:21.272135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.182 [2024-12-05 21:21:21.272568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.182 [2024-12-05 21:21:21.272584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.182 [2024-12-05 21:21:21.272592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.182 [2024-12-05 21:21:21.272765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.182 [2024-12-05 21:21:21.272938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.182 [2024-12-05 21:21:21.272947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.182 [2024-12-05 21:21:21.272954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.182 [2024-12-05 21:21:21.272961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.182 [2024-12-05 21:21:21.285169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.182 [2024-12-05 21:21:21.285590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.182 [2024-12-05 21:21:21.285606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.182 [2024-12-05 21:21:21.285613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.182 [2024-12-05 21:21:21.285785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.182 [2024-12-05 21:21:21.285957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.182 [2024-12-05 21:21:21.285965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.182 [2024-12-05 21:21:21.285971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.182 [2024-12-05 21:21:21.285978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.441 [2024-12-05 21:21:21.298059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.441 [2024-12-05 21:21:21.298493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.441 [2024-12-05 21:21:21.298509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.441 [2024-12-05 21:21:21.298516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.441 [2024-12-05 21:21:21.298684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.441 [2024-12-05 21:21:21.298853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.441 [2024-12-05 21:21:21.298861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.441 [2024-12-05 21:21:21.298871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.441 [2024-12-05 21:21:21.298878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.441 [2024-12-05 21:21:21.311033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.441 [2024-12-05 21:21:21.311448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.441 [2024-12-05 21:21:21.311464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.441 [2024-12-05 21:21:21.311471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.441 [2024-12-05 21:21:21.311640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.441 [2024-12-05 21:21:21.311807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.441 [2024-12-05 21:21:21.311817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.441 [2024-12-05 21:21:21.311824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.441 [2024-12-05 21:21:21.311831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.441 [2024-12-05 21:21:21.324085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.441 [2024-12-05 21:21:21.324535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.441 [2024-12-05 21:21:21.324580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.441 [2024-12-05 21:21:21.324603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.441 [2024-12-05 21:21:21.325187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.441 [2024-12-05 21:21:21.325634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.441 [2024-12-05 21:21:21.325652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.441 [2024-12-05 21:21:21.325667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.441 [2024-12-05 21:21:21.325680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.441 [2024-12-05 21:21:21.338970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.441 [2024-12-05 21:21:21.339459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.441 [2024-12-05 21:21:21.339480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.441 [2024-12-05 21:21:21.339491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.441 [2024-12-05 21:21:21.339745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.441 [2024-12-05 21:21:21.340000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.441 [2024-12-05 21:21:21.340011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.441 [2024-12-05 21:21:21.340021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.441 [2024-12-05 21:21:21.340030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.441 [2024-12-05 21:21:21.352103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.441 [2024-12-05 21:21:21.352479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.441 [2024-12-05 21:21:21.352525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.441 [2024-12-05 21:21:21.352548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.441 [2024-12-05 21:21:21.353131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.441 [2024-12-05 21:21:21.353571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.441 [2024-12-05 21:21:21.353581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.441 [2024-12-05 21:21:21.353587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.441 [2024-12-05 21:21:21.353594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.441 [2024-12-05 21:21:21.364994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.441 [2024-12-05 21:21:21.365453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.441 [2024-12-05 21:21:21.365470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.441 [2024-12-05 21:21:21.365477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.441 [2024-12-05 21:21:21.365645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.441 [2024-12-05 21:21:21.365814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.441 [2024-12-05 21:21:21.365822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.441 [2024-12-05 21:21:21.365828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.441 [2024-12-05 21:21:21.365834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.377789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.378154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.378169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.378176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.378344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.378519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.378528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.378534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.378540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.390646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.391095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.391111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.391121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.391289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.391468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.391477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.391483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.391489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.403425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.403771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.403787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.403795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.403963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.404132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.404140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.404146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.404153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.416209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.416620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.416637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.416644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.416813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.416982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.416990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.416996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.417002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.428958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.429401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.429445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.429468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.430052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.430659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.430697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.430703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.430710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.441762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.442233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.442277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.442299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.442833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.443002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.443011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.443017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.443023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.454626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.455015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.455030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.455037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.455205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.455380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.455389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.455395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.455401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.467438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.467841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.467884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.467908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.468510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.468929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.468937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.468943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.468953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.480291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.480599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.480615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.442 [2024-12-05 21:21:21.480622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.442 [2024-12-05 21:21:21.480790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.442 [2024-12-05 21:21:21.480959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.442 [2024-12-05 21:21:21.480967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.442 [2024-12-05 21:21:21.480973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.442 [2024-12-05 21:21:21.480979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.442 [2024-12-05 21:21:21.493094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.442 [2024-12-05 21:21:21.493511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.442 [2024-12-05 21:21:21.493528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.443 [2024-12-05 21:21:21.493535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.443 [2024-12-05 21:21:21.493704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.443 [2024-12-05 21:21:21.493873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.443 [2024-12-05 21:21:21.493881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.443 [2024-12-05 21:21:21.493887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.443 [2024-12-05 21:21:21.493893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.443 [2024-12-05 21:21:21.505938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.443 [2024-12-05 21:21:21.506351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-12-05 21:21:21.506373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.443 [2024-12-05 21:21:21.506384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.443 [2024-12-05 21:21:21.506552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.443 [2024-12-05 21:21:21.506721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.443 [2024-12-05 21:21:21.506729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.443 [2024-12-05 21:21:21.506736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.443 [2024-12-05 21:21:21.506744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.443 [2024-12-05 21:21:21.518969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.443 [2024-12-05 21:21:21.519421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-12-05 21:21:21.519438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.443 [2024-12-05 21:21:21.519445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.443 [2024-12-05 21:21:21.519618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.443 [2024-12-05 21:21:21.519792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.443 [2024-12-05 21:21:21.519801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.443 [2024-12-05 21:21:21.519807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.443 [2024-12-05 21:21:21.519813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.443 [2024-12-05 21:21:21.532003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.443 [2024-12-05 21:21:21.532404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-12-05 21:21:21.532421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.443 [2024-12-05 21:21:21.532429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.443 [2024-12-05 21:21:21.532602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.443 [2024-12-05 21:21:21.532779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.443 [2024-12-05 21:21:21.532788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.443 [2024-12-05 21:21:21.532794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.443 [2024-12-05 21:21:21.532801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.443 [2024-12-05 21:21:21.545020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.443 [2024-12-05 21:21:21.545317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-12-05 21:21:21.545333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.443 [2024-12-05 21:21:21.545340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.443 [2024-12-05 21:21:21.545520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.443 [2024-12-05 21:21:21.545694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.443 [2024-12-05 21:21:21.545703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.443 [2024-12-05 21:21:21.545709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.443 [2024-12-05 21:21:21.545715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.703 [2024-12-05 21:21:21.557991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.703 [2024-12-05 21:21:21.558468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.703 [2024-12-05 21:21:21.558514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.703 [2024-12-05 21:21:21.558537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.703 [2024-12-05 21:21:21.559071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.703 [2024-12-05 21:21:21.559240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.703 [2024-12-05 21:21:21.559248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.703 [2024-12-05 21:21:21.559254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.703 [2024-12-05 21:21:21.559260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.703 [2024-12-05 21:21:21.570833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.703 [2024-12-05 21:21:21.571209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.703 [2024-12-05 21:21:21.571225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.703 [2024-12-05 21:21:21.571232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.703 [2024-12-05 21:21:21.571405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.703 [2024-12-05 21:21:21.571573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.703 [2024-12-05 21:21:21.571581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.703 [2024-12-05 21:21:21.571587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.703 [2024-12-05 21:21:21.571593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.703 [2024-12-05 21:21:21.583697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.703 [2024-12-05 21:21:21.584067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.703 [2024-12-05 21:21:21.584110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:13.703 [2024-12-05 21:21:21.584133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:13.703 [2024-12-05 21:21:21.584732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:13.703 [2024-12-05 21:21:21.585177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.703 [2024-12-05 21:21:21.585185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.703 [2024-12-05 21:21:21.585192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.703 [2024-12-05 21:21:21.585198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.703 [2024-12-05 21:21:21.596553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.703 [2024-12-05 21:21:21.596903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.703 [2024-12-05 21:21:21.596919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.703 [2024-12-05 21:21:21.596926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.703 [2024-12-05 21:21:21.597093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.703 [2024-12-05 21:21:21.597264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.597275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.597282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.597288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.609344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.609745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.609761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.609768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.609936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.610104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.610112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.610118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.610124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.622099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.622518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.622535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.622542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.622711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.622879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.622888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.622895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.622901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.634850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.635325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.635381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.635406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.635990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.636160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.636168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.636175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.636184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.647643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.647987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.648004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.648013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.648183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.648355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.648364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.648377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.648383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.660546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.660900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.660916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.660923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.661096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.661276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.661284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.661290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.661297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.673411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.673755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.673771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.673778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.673945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.674115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.674123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.674129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.674135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.686238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.686615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.686632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.686639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.686807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.686974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.686982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.686989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.686994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.699098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.699537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.699581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.699603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.700186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.700378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.704 [2024-12-05 21:21:21.700386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.704 [2024-12-05 21:21:21.700408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.704 [2024-12-05 21:21:21.700414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.704 [2024-12-05 21:21:21.711908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.704 [2024-12-05 21:21:21.712306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.704 [2024-12-05 21:21:21.712321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.704 [2024-12-05 21:21:21.712328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.704 [2024-12-05 21:21:21.712533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.704 [2024-12-05 21:21:21.712706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.712715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.712722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.712729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.724699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.725111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.725155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.725179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.725790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.726401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.726410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.726416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.726422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.737441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.737873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.737888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.737895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.738054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.738213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.738221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.738227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.738232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.750187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.750563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.750578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.750585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.750743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.750903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.750911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.750916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.750922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.762981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.763374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.763390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.763397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.763556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.763714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.763725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.763731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.763737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.775793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.776221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.776264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.776287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.776749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.776924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.776933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.776940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.776946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.788831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.789242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.789259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.789266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.789449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.789624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.789632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.789638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.789644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.705 [2024-12-05 21:21:21.801872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.705 [2024-12-05 21:21:21.802271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.705 [2024-12-05 21:21:21.802287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.705 [2024-12-05 21:21:21.802294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.705 [2024-12-05 21:21:21.802471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.705 [2024-12-05 21:21:21.802641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.705 [2024-12-05 21:21:21.802649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.705 [2024-12-05 21:21:21.802655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.705 [2024-12-05 21:21:21.802664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.965 [2024-12-05 21:21:21.814858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.965 [2024-12-05 21:21:21.815266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.965 [2024-12-05 21:21:21.815284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.965 [2024-12-05 21:21:21.815291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.965 [2024-12-05 21:21:21.815470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.965 [2024-12-05 21:21:21.815644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.965 [2024-12-05 21:21:21.815653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.965 [2024-12-05 21:21:21.815660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.815666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 7571.75 IOPS, 29.58 MiB/s [2024-12-05T20:21:22.074Z] [2024-12-05 21:21:21.827696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.828109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.828125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.828132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.828301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.828476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.828485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.828492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.828498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.840580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.840907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.840922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.840929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.841097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.841265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.841273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.841280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.841285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.853475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.853838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.853854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.853861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.854028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.854196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.854204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.854210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.854216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.866248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.866665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.866682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.866689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.866856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.867028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.867036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.867043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.867049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.878980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.879398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.879414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.879422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.879589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.879758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.879766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.879772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.879778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.891848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.892285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.892323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.892347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.892935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.893104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.893112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.893119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.893125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.904612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.905028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.905045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.905053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.905221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.905400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.905409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.905416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.905422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.917378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.917793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.917838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.917862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.918464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.918947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.918955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.918961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.918967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.930178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.966 [2024-12-05 21:21:21.930523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.966 [2024-12-05 21:21:21.930539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.966 [2024-12-05 21:21:21.930546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.966 [2024-12-05 21:21:21.930715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.966 [2024-12-05 21:21:21.930883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.966 [2024-12-05 21:21:21.930894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.966 [2024-12-05 21:21:21.930900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.966 [2024-12-05 21:21:21.930906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.966 [2024-12-05 21:21:21.942992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:21.943396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:21.943442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:21.943466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:21.943947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:21.944116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:21.944124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:21.944130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:21.944136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:21.955778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:21.956171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:21.956187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:21.956193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:21.956353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:21.956542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:21.956552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:21.956558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:21.956564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:21.968595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:21.969012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:21.969028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:21.969035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:21.969203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:21.969379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:21.969389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:21.969395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:21.969405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:21.981325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:21.981725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:21.981740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:21.981747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:21.981906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:21.982065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:21.982072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:21.982078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:21.982084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:21.994063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:21.994463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:21.994509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:21.994532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:21.995046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:21.995205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:21.995213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:21.995219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:21.995224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:22.006852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:22.007263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:22.007279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:22.007286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:22.007461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:22.007630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:22.007638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:22.007644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:22.007650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:22.019722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:22.020131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:22.020184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:22.020208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:22.020811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:22.021243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:22.021251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:22.021257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:22.021263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:22.032722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:22.033157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:22.033174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:22.033181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:22.033354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:22.033536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:22.033545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:22.033552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:22.033558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:22.045767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:22.046177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:22.046193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:22.046200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:22.046377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:22.046551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:22.046559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:22.046565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.967 [2024-12-05 21:21:22.046572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.967 [2024-12-05 21:21:22.058556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.967 [2024-12-05 21:21:22.058984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.967 [2024-12-05 21:21:22.059026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:13.967 [2024-12-05 21:21:22.059049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:13.967 [2024-12-05 21:21:22.059656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:13.967 [2024-12-05 21:21:22.060178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.967 [2024-12-05 21:21:22.060186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.967 [2024-12-05 21:21:22.060192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.968 [2024-12-05 21:21:22.060199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.227 [2024-12-05 21:21:22.071549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.227 [2024-12-05 21:21:22.071955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.227 [2024-12-05 21:21:22.071972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.227 [2024-12-05 21:21:22.071979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.227 [2024-12-05 21:21:22.072151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.227 [2024-12-05 21:21:22.072324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.227 [2024-12-05 21:21:22.072333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.227 [2024-12-05 21:21:22.072339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.227 [2024-12-05 21:21:22.072345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.227 [2024-12-05 21:21:22.084382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.227 [2024-12-05 21:21:22.084766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.227 [2024-12-05 21:21:22.084782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.227 [2024-12-05 21:21:22.084788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.227 [2024-12-05 21:21:22.084947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.227 [2024-12-05 21:21:22.085106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.227 [2024-12-05 21:21:22.085114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.227 [2024-12-05 21:21:22.085120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.227 [2024-12-05 21:21:22.085125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.227 [2024-12-05 21:21:22.097275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.227 [2024-12-05 21:21:22.097691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.227 [2024-12-05 21:21:22.097707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.227 [2024-12-05 21:21:22.097714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.227 [2024-12-05 21:21:22.097882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.227 [2024-12-05 21:21:22.098050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.227 [2024-12-05 21:21:22.098061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.227 [2024-12-05 21:21:22.098067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.227 [2024-12-05 21:21:22.098073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.227 [2024-12-05 21:21:22.110144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.227 [2024-12-05 21:21:22.110556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.227 [2024-12-05 21:21:22.110573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.227 [2024-12-05 21:21:22.110580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.227 [2024-12-05 21:21:22.110749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.227 [2024-12-05 21:21:22.110917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.227 [2024-12-05 21:21:22.110924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.227 [2024-12-05 21:21:22.110931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.227 [2024-12-05 21:21:22.110936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.227 [2024-12-05 21:21:22.123050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.227 [2024-12-05 21:21:22.123442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.227 [2024-12-05 21:21:22.123458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.227 [2024-12-05 21:21:22.123464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.227 [2024-12-05 21:21:22.123623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.227 [2024-12-05 21:21:22.123782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.227 [2024-12-05 21:21:22.123790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.227 [2024-12-05 21:21:22.123796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.227 [2024-12-05 21:21:22.123802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.227 [2024-12-05 21:21:22.135855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.136269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.136284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.136291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.136469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.136638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.136646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.136652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.136658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.148606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.149022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.149037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.149044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.149212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.149389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.149399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.149405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.149411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.161436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.161792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.161809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.161816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.161984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.162153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.162161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.162168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.162174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.174286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.174714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.174730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.174737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.174906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.175079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.175088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.175096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.175103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.187051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.187508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.187561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.187584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.188168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.188426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.188434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.188440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.188446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.199925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.200345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.200360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.200372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.200540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.200708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.200716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.200722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.200728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.212894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.213246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.213263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.213270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.213446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.213615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.213623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.213630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.213636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.225810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.226259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.226274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.226281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.226461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.226630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.226638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.226644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.226650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.238574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.239016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.239032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.239039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.239207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.239382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.239391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.239398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.239404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.228 [2024-12-05 21:21:22.251397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.228 [2024-12-05 21:21:22.251720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.228 [2024-12-05 21:21:22.251736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.228 [2024-12-05 21:21:22.251743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.228 [2024-12-05 21:21:22.251910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.228 [2024-12-05 21:21:22.252078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.228 [2024-12-05 21:21:22.252086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.228 [2024-12-05 21:21:22.252093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.228 [2024-12-05 21:21:22.252099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.229 [2024-12-05 21:21:22.264280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.229 [2024-12-05 21:21:22.264634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.229 [2024-12-05 21:21:22.264650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.229 [2024-12-05 21:21:22.264658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.229 [2024-12-05 21:21:22.264826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.229 [2024-12-05 21:21:22.264995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.229 [2024-12-05 21:21:22.265003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.229 [2024-12-05 21:21:22.265013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.229 [2024-12-05 21:21:22.265020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.229 [2024-12-05 21:21:22.277115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.229 [2024-12-05 21:21:22.277508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.229 [2024-12-05 21:21:22.277525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.229 [2024-12-05 21:21:22.277533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.229 [2024-12-05 21:21:22.277701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.229 [2024-12-05 21:21:22.277870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.229 [2024-12-05 21:21:22.277878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.229 [2024-12-05 21:21:22.277885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.229 [2024-12-05 21:21:22.277891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.229 [2024-12-05 21:21:22.289934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.229 [2024-12-05 21:21:22.290359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.229 [2024-12-05 21:21:22.290380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.229 [2024-12-05 21:21:22.290387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.229 [2024-12-05 21:21:22.290560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.229 [2024-12-05 21:21:22.290733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.229 [2024-12-05 21:21:22.290741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.229 [2024-12-05 21:21:22.290748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.229 [2024-12-05 21:21:22.290754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.229 [2024-12-05 21:21:22.302987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.229 [2024-12-05 21:21:22.303423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.229 [2024-12-05 21:21:22.303440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.229 [2024-12-05 21:21:22.303447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.229 [2024-12-05 21:21:22.303620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.229 [2024-12-05 21:21:22.303793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.229 [2024-12-05 21:21:22.303802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.229 [2024-12-05 21:21:22.303808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.229 [2024-12-05 21:21:22.303815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.229 [2024-12-05 21:21:22.315989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.229 [2024-12-05 21:21:22.316394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.229 [2024-12-05 21:21:22.316409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.229 [2024-12-05 21:21:22.316417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.229 [2024-12-05 21:21:22.316584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.229 [2024-12-05 21:21:22.316757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.229 [2024-12-05 21:21:22.316765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.229 [2024-12-05 21:21:22.316771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.229 [2024-12-05 21:21:22.316777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.229 [2024-12-05 21:21:22.328948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.229 [2024-12-05 21:21:22.329382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.229 [2024-12-05 21:21:22.329398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.229 [2024-12-05 21:21:22.329405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.229 [2024-12-05 21:21:22.329578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.229 [2024-12-05 21:21:22.329751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.229 [2024-12-05 21:21:22.329759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.229 [2024-12-05 21:21:22.329765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.229 [2024-12-05 21:21:22.329771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.341911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.342247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.342263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.342270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.342443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.342612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.493 [2024-12-05 21:21:22.342620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.493 [2024-12-05 21:21:22.342626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.493 [2024-12-05 21:21:22.342632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.354948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.355300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.355316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.355326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.355498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.355667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.493 [2024-12-05 21:21:22.355675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.493 [2024-12-05 21:21:22.355681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.493 [2024-12-05 21:21:22.355687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.367817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.368252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.368267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.368275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.368451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.368620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.493 [2024-12-05 21:21:22.368628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.493 [2024-12-05 21:21:22.368635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.493 [2024-12-05 21:21:22.368641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.380567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.380964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.380981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.380988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.381156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.381327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.493 [2024-12-05 21:21:22.381336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.493 [2024-12-05 21:21:22.381342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.493 [2024-12-05 21:21:22.381348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.393419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.393849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.393893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.393916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.394519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.395046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.493 [2024-12-05 21:21:22.395054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.493 [2024-12-05 21:21:22.395060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.493 [2024-12-05 21:21:22.395066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.406251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.406661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.406677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.406684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.406853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.407021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.493 [2024-12-05 21:21:22.407029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.493 [2024-12-05 21:21:22.407035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.493 [2024-12-05 21:21:22.407041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.493 [2024-12-05 21:21:22.419124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.493 [2024-12-05 21:21:22.419548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.493 [2024-12-05 21:21:22.419593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.493 [2024-12-05 21:21:22.419616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.493 [2024-12-05 21:21:22.420096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.493 [2024-12-05 21:21:22.420255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.420263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.420269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.420275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.432010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.432361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.432381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.432388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.432556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.432724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.432732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.432741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.432748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.444821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.445245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.445260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.445267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.445452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.445621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.445628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.445635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.445641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.457555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.457983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.458026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.458049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.458652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.459225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.459233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.459239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.459245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.470345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.470688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.470703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.470710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.470869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.471027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.471034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.471040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.471046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.483125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.483550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.483566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.483573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.483732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.483891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.483899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.483905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.483911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.495968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.496393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.496409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.496416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.496986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.497146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.497153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.497159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.497165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.508798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.509189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.509203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.509209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.509374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.509560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.509568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.509575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.509581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.521779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.522211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.522227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.522237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.522412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.522583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.522591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.522597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.522603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.534517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.534936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.534952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.534959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.535117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.494 [2024-12-05 21:21:22.535276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.494 [2024-12-05 21:21:22.535284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.494 [2024-12-05 21:21:22.535290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.494 [2024-12-05 21:21:22.535296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.494 [2024-12-05 21:21:22.547412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.494 [2024-12-05 21:21:22.547859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.494 [2024-12-05 21:21:22.547875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.494 [2024-12-05 21:21:22.547883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.494 [2024-12-05 21:21:22.548056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.495 [2024-12-05 21:21:22.548229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.495 [2024-12-05 21:21:22.548238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.495 [2024-12-05 21:21:22.548245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.495 [2024-12-05 21:21:22.548251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.495 [2024-12-05 21:21:22.560454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.495 [2024-12-05 21:21:22.560890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.495 [2024-12-05 21:21:22.560906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.495 [2024-12-05 21:21:22.560913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.495 [2024-12-05 21:21:22.561087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.495 [2024-12-05 21:21:22.561264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.495 [2024-12-05 21:21:22.561272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.495 [2024-12-05 21:21:22.561279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.495 [2024-12-05 21:21:22.561285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.495 [2024-12-05 21:21:22.573204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.495 [2024-12-05 21:21:22.573565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.495 [2024-12-05 21:21:22.573582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.495 [2024-12-05 21:21:22.573589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.495 [2024-12-05 21:21:22.573757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.495 [2024-12-05 21:21:22.573925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.495 [2024-12-05 21:21:22.573933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.495 [2024-12-05 21:21:22.573939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.495 [2024-12-05 21:21:22.573945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.495 [2024-12-05 21:21:22.586072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.495 [2024-12-05 21:21:22.586532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.495 [2024-12-05 21:21:22.586576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.495 [2024-12-05 21:21:22.586599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.495 [2024-12-05 21:21:22.587182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.495 [2024-12-05 21:21:22.587497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.495 [2024-12-05 21:21:22.587507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.495 [2024-12-05 21:21:22.587513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.495 [2024-12-05 21:21:22.587519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.839 [2024-12-05 21:21:22.599113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.839 [2024-12-05 21:21:22.599546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.839 [2024-12-05 21:21:22.599563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.839 [2024-12-05 21:21:22.599570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.599743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.599916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.599923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.599933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.599939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.612177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.612616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.612632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.612640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.612812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.612985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.612993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.613000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.613006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.625069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.625473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.625489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.625496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.625665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.625834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.625842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.625848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.625854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.637921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.638335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.638350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.638356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.638551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.638720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.638728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.638734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.638740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.650792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.651239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.651255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.651262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.651443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.651624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.651632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.651638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.651644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.663561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.663893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.663909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.663916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.664091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.664250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.664258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.664264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.664270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.676340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.676763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.676778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.676785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.676944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.677103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.677110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.677117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.677122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.689106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.689527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.689543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.689553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.689712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.689871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.689878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.689884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.689890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.701962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.702350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.702372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.702380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.702547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.702716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.702725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.702731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.702737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.714918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.840 [2024-12-05 21:21:22.715253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.840 [2024-12-05 21:21:22.715296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.840 [2024-12-05 21:21:22.715319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.840 [2024-12-05 21:21:22.715784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.840 [2024-12-05 21:21:22.715945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.840 [2024-12-05 21:21:22.715953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.840 [2024-12-05 21:21:22.715959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.840 [2024-12-05 21:21:22.715965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.840 [2024-12-05 21:21:22.727768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.841 [2024-12-05 21:21:22.728133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.841 [2024-12-05 21:21:22.728149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.841 [2024-12-05 21:21:22.728156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.841 [2024-12-05 21:21:22.728323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.841 [2024-12-05 21:21:22.728503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.841 [2024-12-05 21:21:22.728512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.841 [2024-12-05 21:21:22.728518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.841 [2024-12-05 21:21:22.728524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.841 [2024-12-05 21:21:22.740558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.841 [2024-12-05 21:21:22.740969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.841 [2024-12-05 21:21:22.740985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.841 [2024-12-05 21:21:22.740992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.841 [2024-12-05 21:21:22.741159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.841 [2024-12-05 21:21:22.741327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.841 [2024-12-05 21:21:22.741335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.841 [2024-12-05 21:21:22.741342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.841 [2024-12-05 21:21:22.741348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.841 [2024-12-05 21:21:22.753475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.841 [2024-12-05 21:21:22.753826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.841 [2024-12-05 21:21:22.753841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.841 [2024-12-05 21:21:22.753848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.841 [2024-12-05 21:21:22.754016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.841 [2024-12-05 21:21:22.754185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.841 [2024-12-05 21:21:22.754194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.841 [2024-12-05 21:21:22.754200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.841 [2024-12-05 21:21:22.754207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.841 [2024-12-05 21:21:22.766479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.841 [2024-12-05 21:21:22.766912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.841 [2024-12-05 21:21:22.766955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.841 [2024-12-05 21:21:22.766977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.841 [2024-12-05 21:21:22.767577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.841 [2024-12-05 21:21:22.768022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.841 [2024-12-05 21:21:22.768030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.841 [2024-12-05 21:21:22.768036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.841 [2024-12-05 21:21:22.768048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.841 [2024-12-05 21:21:22.779356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.841 [2024-12-05 21:21:22.779676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.841 [2024-12-05 21:21:22.779718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.841 [2024-12-05 21:21:22.779740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.841 [2024-12-05 21:21:22.780324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.841 [2024-12-05 21:21:22.780933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.841 [2024-12-05 21:21:22.780942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.841 [2024-12-05 21:21:22.780948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.841 [2024-12-05 21:21:22.780954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.841 [2024-12-05 21:21:22.792242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.841 [2024-12-05 21:21:22.792538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.841 [2024-12-05 21:21:22.792554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:14.841 [2024-12-05 21:21:22.792561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:14.841 [2024-12-05 21:21:22.792729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:14.841 [2024-12-05 21:21:22.792901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.841 [2024-12-05 21:21:22.792909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.841 [2024-12-05 21:21:22.792916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.841 [2024-12-05 21:21:22.792922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.841 [2024-12-05 21:21:22.804996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.841 [2024-12-05 21:21:22.805308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.841 [2024-12-05 21:21:22.805324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.841 [2024-12-05 21:21:22.805331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.841 [2024-12-05 21:21:22.805508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.841 [2024-12-05 21:21:22.805682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.841 [2024-12-05 21:21:22.805691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.841 [2024-12-05 21:21:22.805697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.841 [2024-12-05 21:21:22.805703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.841 [2024-12-05 21:21:22.818103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.841 [2024-12-05 21:21:22.818436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.841 [2024-12-05 21:21:22.818453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.841 [2024-12-05 21:21:22.818460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.841 [2024-12-05 21:21:22.818634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.841 [2024-12-05 21:21:22.818808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.841 [2024-12-05 21:21:22.818816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.841 [2024-12-05 21:21:22.818823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.841 [2024-12-05 21:21:22.818829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.841 6057.40 IOPS, 23.66 MiB/s [2024-12-05T20:21:22.949Z]
00:28:14.841 [2024-12-05 21:21:22.831031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.841 [2024-12-05 21:21:22.831312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.841 [2024-12-05 21:21:22.831328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.841 [2024-12-05 21:21:22.831335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.841 [2024-12-05 21:21:22.831532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.841 [2024-12-05 21:21:22.831707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.841 [2024-12-05 21:21:22.831715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.841 [2024-12-05 21:21:22.831721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.841 [2024-12-05 21:21:22.831728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.841 [2024-12-05 21:21:22.844157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.841 [2024-12-05 21:21:22.844544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.841 [2024-12-05 21:21:22.844561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.841 [2024-12-05 21:21:22.844569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.841 [2024-12-05 21:21:22.844742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.841 [2024-12-05 21:21:22.844916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.841 [2024-12-05 21:21:22.844924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.841 [2024-12-05 21:21:22.844930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.842 [2024-12-05 21:21:22.844936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.842 [2024-12-05 21:21:22.857198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.842 [2024-12-05 21:21:22.857540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.842 [2024-12-05 21:21:22.857556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.842 [2024-12-05 21:21:22.857567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.842 [2024-12-05 21:21:22.857741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.842 [2024-12-05 21:21:22.857915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.842 [2024-12-05 21:21:22.857924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.842 [2024-12-05 21:21:22.857930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.842 [2024-12-05 21:21:22.857936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.842 [2024-12-05 21:21:22.870457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.842 [2024-12-05 21:21:22.870809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.842 [2024-12-05 21:21:22.870826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.842 [2024-12-05 21:21:22.870834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.842 [2024-12-05 21:21:22.871017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.842 [2024-12-05 21:21:22.871201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.842 [2024-12-05 21:21:22.871210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.842 [2024-12-05 21:21:22.871216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.842 [2024-12-05 21:21:22.871223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.842 [2024-12-05 21:21:22.883756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.842 [2024-12-05 21:21:22.884173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.842 [2024-12-05 21:21:22.884190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.842 [2024-12-05 21:21:22.884198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.842 [2024-12-05 21:21:22.884387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.842 [2024-12-05 21:21:22.884571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.842 [2024-12-05 21:21:22.884578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.842 [2024-12-05 21:21:22.884585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.842 [2024-12-05 21:21:22.884591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.842 [2024-12-05 21:21:22.896793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.842 [2024-12-05 21:21:22.897077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.842 [2024-12-05 21:21:22.897093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.842 [2024-12-05 21:21:22.897100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.842 [2024-12-05 21:21:22.897273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.842 [2024-12-05 21:21:22.897453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.842 [2024-12-05 21:21:22.897461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.842 [2024-12-05 21:21:22.897468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.842 [2024-12-05 21:21:22.897473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.842 [2024-12-05 21:21:22.909894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.842 [2024-12-05 21:21:22.910220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.842 [2024-12-05 21:21:22.910238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:14.842 [2024-12-05 21:21:22.910245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:14.842 [2024-12-05 21:21:22.910424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:14.842 [2024-12-05 21:21:22.910598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.842 [2024-12-05 21:21:22.910607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.842 [2024-12-05 21:21:22.910613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.842 [2024-12-05 21:21:22.910620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.111 [2024-12-05 21:21:22.922889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.111 [2024-12-05 21:21:22.923251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.111 [2024-12-05 21:21:22.923267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.111 [2024-12-05 21:21:22.923274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.111 [2024-12-05 21:21:22.923451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.111 [2024-12-05 21:21:22.923625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.111 [2024-12-05 21:21:22.923632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.111 [2024-12-05 21:21:22.923638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.111 [2024-12-05 21:21:22.923644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.111 [2024-12-05 21:21:22.935895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.111 [2024-12-05 21:21:22.936260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.111 [2024-12-05 21:21:22.936276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.111 [2024-12-05 21:21:22.936284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.111 [2024-12-05 21:21:22.936457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.111 [2024-12-05 21:21:22.936626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.111 [2024-12-05 21:21:22.936634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.111 [2024-12-05 21:21:22.936641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.111 [2024-12-05 21:21:22.936651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.111 [2024-12-05 21:21:22.948880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.111 [2024-12-05 21:21:22.949213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.111 [2024-12-05 21:21:22.949229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.111 [2024-12-05 21:21:22.949236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.111 [2024-12-05 21:21:22.949413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.111 [2024-12-05 21:21:22.949582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.111 [2024-12-05 21:21:22.949590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.111 [2024-12-05 21:21:22.949596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.111 [2024-12-05 21:21:22.949603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.111 [2024-12-05 21:21:22.962006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.111 [2024-12-05 21:21:22.962291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.111 [2024-12-05 21:21:22.962306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.111 [2024-12-05 21:21:22.962314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.111 [2024-12-05 21:21:22.962492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.111 [2024-12-05 21:21:22.962665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:22.962672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:22.962678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:22.962684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:22.975007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:22.975362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:22.975384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:22.975391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:22.975559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:22.975728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:22.975736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:22.975742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:22.975749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:22.987966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:22.988352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:22.988372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:22.988383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:22.988552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:22.988721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:22.988729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:22.988735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:22.988741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.000715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.001086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.001130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.001153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.001622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.001791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.001800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.001806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.001813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.013555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.013867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.013882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.013890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.014058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.014225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.014234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.014240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.014246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.026355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.026713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.026729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.026736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.026907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.027076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.027084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.027090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.027096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.039147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.039452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.039469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.039476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.039644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.039811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.039819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.039826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.039832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.051973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.052335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.052395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.052427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.052891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.053059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.053067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.053074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.053080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.064841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.065134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.065165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.065173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.065347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.065531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.065544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.065550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.065557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.077884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.078304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.078320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.078327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.078507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.078680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.078689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.078695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.078701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.090803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.091162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.091178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.091185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.091353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.091528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.091537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.091543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.091549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.103624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.103973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.103989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.103996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.104163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.104332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.104340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.104347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.104356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.116548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.116881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.116896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.116903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.117071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.117239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.117248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.117254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.117261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.129336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.129708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.129725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.129732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.129901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.130069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.130077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.130083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.130089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.142183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.142636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.142652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.142660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.142828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.142997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.143005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.143011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.143017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.155053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:15.112 [2024-12-05 21:21:23.155494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.112 [2024-12-05 21:21:23.155510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:15.112 [2024-12-05 21:21:23.155517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:15.112 [2024-12-05 21:21:23.155685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:15.112 [2024-12-05 21:21:23.155858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:15.112 [2024-12-05 21:21:23.155866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:15.112 [2024-12-05 21:21:23.155873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:15.112 [2024-12-05 21:21:23.155879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:15.112 [2024-12-05 21:21:23.167844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.112 [2024-12-05 21:21:23.168301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.112 [2024-12-05 21:21:23.168345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.112 [2024-12-05 21:21:23.168380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.113 [2024-12-05 21:21:23.168865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.113 [2024-12-05 21:21:23.169035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.113 [2024-12-05 21:21:23.169043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.113 [2024-12-05 21:21:23.169049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.113 [2024-12-05 21:21:23.169055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.113 [2024-12-05 21:21:23.180684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.113 [2024-12-05 21:21:23.181074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.113 [2024-12-05 21:21:23.181090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.113 [2024-12-05 21:21:23.181096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.113 [2024-12-05 21:21:23.181256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.113 [2024-12-05 21:21:23.181439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.113 [2024-12-05 21:21:23.181448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.113 [2024-12-05 21:21:23.181455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.113 [2024-12-05 21:21:23.181460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.113 [2024-12-05 21:21:23.193526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.113 [2024-12-05 21:21:23.193859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.113 [2024-12-05 21:21:23.193874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.113 [2024-12-05 21:21:23.193881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.113 [2024-12-05 21:21:23.194044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.113 [2024-12-05 21:21:23.194203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.113 [2024-12-05 21:21:23.194211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.113 [2024-12-05 21:21:23.194217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.113 [2024-12-05 21:21:23.194222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.113 [2024-12-05 21:21:23.206317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.113 [2024-12-05 21:21:23.206744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.113 [2024-12-05 21:21:23.206760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.113 [2024-12-05 21:21:23.206766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.113 [2024-12-05 21:21:23.206925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.113 [2024-12-05 21:21:23.207084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.113 [2024-12-05 21:21:23.207092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.113 [2024-12-05 21:21:23.207098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.113 [2024-12-05 21:21:23.207104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.219308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.219740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.219757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.219764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.219931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.220100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.220109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.220115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.220121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.232287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.232716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.232733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.232740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.232909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.233078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.233090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.233097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.233104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.245039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.245478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.245495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.245502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.245670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.245839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.245847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.245854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.245860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.257884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.258278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.258294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.258301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.258475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.258643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.258651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.258657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.258663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.270670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.271098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.271142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.271166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.271769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.272222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.272230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.272236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.272246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.283416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.283834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.283849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.283856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.284024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.284193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.284201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.284208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.284214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.296217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.296650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.296666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.296673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.296841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.297013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.297021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.297028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.297034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.308964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.309354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.309374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.309381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.309549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.309718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.309726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.309732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.309738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.321809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.322232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.322250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.322257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.322450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.322624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.322633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.322639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.322645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.334831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.335236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.335251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.335258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.335438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.335611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.335620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.335626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.335632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.347746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.348136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.348178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.348201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.348796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.349361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.349374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.349381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.349387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.360576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.361015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.361031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.361038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.361209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.361385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.361394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.361401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.361406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.373380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.373820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.373836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.373844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.374012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.374181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.374190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.374197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.374203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.386266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.386673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.386689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.386696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.386864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.387032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.387040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.374 [2024-12-05 21:21:23.387047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.374 [2024-12-05 21:21:23.387053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.374 [2024-12-05 21:21:23.399263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.374 [2024-12-05 21:21:23.399666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.374 [2024-12-05 21:21:23.399681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.374 [2024-12-05 21:21:23.399688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.374 [2024-12-05 21:21:23.399856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.374 [2024-12-05 21:21:23.400028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.374 [2024-12-05 21:21:23.400041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.400047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.400054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.375 [2024-12-05 21:21:23.412245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.375 [2024-12-05 21:21:23.412695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.375 [2024-12-05 21:21:23.412712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.375 [2024-12-05 21:21:23.412719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.375 [2024-12-05 21:21:23.412892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.375 [2024-12-05 21:21:23.413064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.375 [2024-12-05 21:21:23.413072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.413078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.413084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.375 [2024-12-05 21:21:23.425286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.375 [2024-12-05 21:21:23.425722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.375 [2024-12-05 21:21:23.425739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.375 [2024-12-05 21:21:23.425746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.375 [2024-12-05 21:21:23.425914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.375 [2024-12-05 21:21:23.426083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.375 [2024-12-05 21:21:23.426091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.426098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.426105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.375 [2024-12-05 21:21:23.438135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.375 [2024-12-05 21:21:23.438483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.375 [2024-12-05 21:21:23.438499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.375 [2024-12-05 21:21:23.438506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.375 [2024-12-05 21:21:23.438674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.375 [2024-12-05 21:21:23.438843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.375 [2024-12-05 21:21:23.438851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.438857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.438863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.375 [2024-12-05 21:21:23.450971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.375 [2024-12-05 21:21:23.451391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.375 [2024-12-05 21:21:23.451408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.375 [2024-12-05 21:21:23.451415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.375 [2024-12-05 21:21:23.451591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.375 [2024-12-05 21:21:23.451750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.375 [2024-12-05 21:21:23.451759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.451765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.451770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.375 [2024-12-05 21:21:23.463819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.375 [2024-12-05 21:21:23.464211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.375 [2024-12-05 21:21:23.464226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.375 [2024-12-05 21:21:23.464233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.375 [2024-12-05 21:21:23.464415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.375 [2024-12-05 21:21:23.464583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.375 [2024-12-05 21:21:23.464592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.464598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.464604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.375 [2024-12-05 21:21:23.476879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.375 [2024-12-05 21:21:23.477289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.375 [2024-12-05 21:21:23.477305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.375 [2024-12-05 21:21:23.477312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.375 [2024-12-05 21:21:23.477493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.375 [2024-12-05 21:21:23.477667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.375 [2024-12-05 21:21:23.477675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.375 [2024-12-05 21:21:23.477681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.375 [2024-12-05 21:21:23.477688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.489711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.490123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.490142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.490149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.490317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.490492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.490501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.490507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.490513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1463569 Killed "${NVMF_APP[@]}" "$@" 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.635 [2024-12-05 21:21:23.502754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.503176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.503192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.503200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.503376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.503550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.503558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.503565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:15.635 [2024-12-05 21:21:23.503571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1464976 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1464976 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1464976 ']' 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.635 21:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.635 [2024-12-05 21:21:23.515821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.516232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.516246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.516253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.516432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.516606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.516614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.516621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.516627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.528886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.529240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.529255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.529263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.529445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.529619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.529627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.529635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.529642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.541901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.542336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.542352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.542359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.542538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.542716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.542724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.542731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.542737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:15.635 [2024-12-05 21:21:23.554580] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:28:15.635 [2024-12-05 21:21:23.554616] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.635 [2024-12-05 21:21:23.554882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.555288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.555302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.555309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.555488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.555658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.555666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.555673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.555680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.567907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.568337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.568352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.568359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.568534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.568703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.568711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.568718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.568724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.580941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.581388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.581405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.581412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.581585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.581758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.581767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.581774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.581780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.594040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.594472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.594489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.594499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.594673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.594847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.594855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.594862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.594868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.607101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.607550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.607567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.635 [2024-12-05 21:21:23.607575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.635 [2024-12-05 21:21:23.607749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.635 [2024-12-05 21:21:23.607923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.635 [2024-12-05 21:21:23.607931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.635 [2024-12-05 21:21:23.607938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.635 [2024-12-05 21:21:23.607944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.635 [2024-12-05 21:21:23.620133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.635 [2024-12-05 21:21:23.620566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.635 [2024-12-05 21:21:23.620583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.620591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.620764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.620937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.620947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.620955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.620962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.633054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.633479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.633496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.633504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.633672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.633845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.633854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.633860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.633866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.635554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:15.636 [2024-12-05 21:21:23.645950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.646390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.646407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.646415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.646584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.646757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.646766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.646774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.646780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.658891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.659297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.659313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.659321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.659497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.659667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.659676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.659683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.659689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.671892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.672301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.672316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.672324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.672500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.672670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.672678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.672689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.672695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:15.636 [2024-12-05 21:21:23.676469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.636 [2024-12-05 21:21:23.676492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.636 [2024-12-05 21:21:23.676499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.636 [2024-12-05 21:21:23.676504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:15.636 [2024-12-05 21:21:23.676510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.636 [2024-12-05 21:21:23.677857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.636 [2024-12-05 21:21:23.677967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.636 [2024-12-05 21:21:23.677968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.636 [2024-12-05 21:21:23.684998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.685454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.685473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.685482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.685657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.685832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.685840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.685848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.685856] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.698109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.698549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.698570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.698578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.698754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.698927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.698936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.698944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.698951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.711196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.711636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.711656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.711672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.711847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.712021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.712029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.712036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.712044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.724287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.724704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.724724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.724733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.724909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.725082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.725090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.725097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.725104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.636 [2024-12-05 21:21:23.737343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.636 [2024-12-05 21:21:23.737777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.636 [2024-12-05 21:21:23.737796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.636 [2024-12-05 21:21:23.737805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.636 [2024-12-05 21:21:23.737979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.636 [2024-12-05 21:21:23.738154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.636 [2024-12-05 21:21:23.738162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.636 [2024-12-05 21:21:23.738170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.636 [2024-12-05 21:21:23.738177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.896 [2024-12-05 21:21:23.750426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.896 [2024-12-05 21:21:23.750832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.896 [2024-12-05 21:21:23.750849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.896 [2024-12-05 21:21:23.750856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.896 [2024-12-05 21:21:23.751030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.896 [2024-12-05 21:21:23.751208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.896 [2024-12-05 21:21:23.751216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.896 [2024-12-05 21:21:23.751224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.896 [2024-12-05 21:21:23.751230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.896 [2024-12-05 21:21:23.763430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.896 [2024-12-05 21:21:23.763835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.896 [2024-12-05 21:21:23.763851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.896 [2024-12-05 21:21:23.763858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.896 [2024-12-05 21:21:23.764032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.896 [2024-12-05 21:21:23.764205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.896 [2024-12-05 21:21:23.764213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.896 [2024-12-05 21:21:23.764220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.896 [2024-12-05 21:21:23.764227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.896 [2024-12-05 21:21:23.776457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.896 [2024-12-05 21:21:23.776885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.896 [2024-12-05 21:21:23.776901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.896 [2024-12-05 21:21:23.776908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.896 [2024-12-05 21:21:23.777081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.896 [2024-12-05 21:21:23.777255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.896 [2024-12-05 21:21:23.777263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.896 [2024-12-05 21:21:23.777269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.896 [2024-12-05 21:21:23.777276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.896 [2024-12-05 21:21:23.789517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.896 [2024-12-05 21:21:23.789923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.896 [2024-12-05 21:21:23.789939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.896 [2024-12-05 21:21:23.789946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.896 [2024-12-05 21:21:23.790119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.896 [2024-12-05 21:21:23.790293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.896 [2024-12-05 21:21:23.790301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.896 [2024-12-05 21:21:23.790311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.896 [2024-12-05 21:21:23.790318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.896 [2024-12-05 21:21:23.802553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.896 [2024-12-05 21:21:23.802963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.896 [2024-12-05 21:21:23.802979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.896 [2024-12-05 21:21:23.802986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.896 [2024-12-05 21:21:23.803159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.896 [2024-12-05 21:21:23.803334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.896 [2024-12-05 21:21:23.803342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.896 [2024-12-05 21:21:23.803350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.896 [2024-12-05 21:21:23.803356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.896 [2024-12-05 21:21:23.815591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.896 [2024-12-05 21:21:23.815997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.816014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.816021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.816194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.816371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.816380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.816387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.816393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 5047.83 IOPS, 19.72 MiB/s [2024-12-05T20:21:24.005Z] [2024-12-05 21:21:23.829918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.830327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.830344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.830351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.830533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.830711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.830720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.830727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.830733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.842960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.843377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.843393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.843400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.843573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.843747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.843755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.843762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.843768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.855984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.856314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.856330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.856337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.856515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.856688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.856696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.856703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.856709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.869111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.869444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.869460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.869467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.869640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.869813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.869821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.869827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.869833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.882228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.882618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.882635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.882646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.882819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.882992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.883000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.883007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.883013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.895258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.895592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.895608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.895616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.895788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.895961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.895969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.895976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.895982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.908266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.908656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.908673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.908681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.908854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.909028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.909036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.909043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.909050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.921284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.921697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.921713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.921721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.921895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.922071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.922080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.922087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.922093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.934313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.934731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.934748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.934755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.934928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.935102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.935111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.935117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.935123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.947327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.947744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.947759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.947767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.947940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.948112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.948121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.948127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.948133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.960364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.960775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.960792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.960799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.960971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.961145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.961153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.961159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.961169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.973398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.973804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.973820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.973827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.974000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.974173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.974182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.974188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.974194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.986416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.986823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.986839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.986846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:23.987019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:23.987192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:23.987200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:23.987206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:23.987212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.897 [2024-12-05 21:21:23.999439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.897 [2024-12-05 21:21:23.999846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.897 [2024-12-05 21:21:23.999862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:15.897 [2024-12-05 21:21:23.999869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:15.897 [2024-12-05 21:21:24.000043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:15.897 [2024-12-05 21:21:24.000217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.897 [2024-12-05 21:21:24.000226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.897 [2024-12-05 21:21:24.000232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.897 [2024-12-05 21:21:24.000238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.012481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.012893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.012909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.012916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.013088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.013264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.013272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.013278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.013285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.025526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.025934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.025950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.025957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.026129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.026303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.026311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.026318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.026324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.038573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.038980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.038996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.039003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.039175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.039349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.039357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.039364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.039379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.051594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.051976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.051992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.052002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.052176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.052350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.052358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.052364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.052377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.064600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.064985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.065002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.065009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.065182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.065355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.065364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.065375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.065381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.077611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.078015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.078031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.078038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.078211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.078391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.078400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.078407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.078413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.090630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.091037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.091052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.091059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.091232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.091411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.091423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.091430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.091436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.103665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.104082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.104098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.157 [2024-12-05 21:21:24.104106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.157 [2024-12-05 21:21:24.104279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.157 [2024-12-05 21:21:24.104457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.157 [2024-12-05 21:21:24.104465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.157 [2024-12-05 21:21:24.104472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.157 [2024-12-05 21:21:24.104478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.157 [2024-12-05 21:21:24.116686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.157 [2024-12-05 21:21:24.117094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.157 [2024-12-05 21:21:24.117109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.117116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.117289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.117467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.117476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.117482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.117489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.129729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.132559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.132577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.132586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.132760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.132932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.132940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.132946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.132955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.142761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.143183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.143200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.143207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.143388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.143562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.143571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.143577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.143583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.155857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.156260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.156276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.156283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.156461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.156634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.156642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.156649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.156655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.168897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.169303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.169320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.169327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.169509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.169684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.169693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.169699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.169705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.181955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.182363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.182385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.182392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.182564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.182737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.182746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.182752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.182759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.195009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.195372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.195389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.195396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.195568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.195741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.195749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.195756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.195763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.207996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.208428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.208445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.208452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.208624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.208799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.208808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.208817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.208824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.221083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.221375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.221392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.221399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.221575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.221750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.221758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.221764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.221771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.234181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.234597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.234614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.234621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.234795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.234967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.234976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.234982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.234988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.247245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.247525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.247542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.247549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.247722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.247896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.247904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.247911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.247917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.158 [2024-12-05 21:21:24.260337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.158 [2024-12-05 21:21:24.260656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.158 [2024-12-05 21:21:24.260673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.158 [2024-12-05 21:21:24.260680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.158 [2024-12-05 21:21:24.260854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.158 [2024-12-05 21:21:24.261027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.158 [2024-12-05 21:21:24.261038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.158 [2024-12-05 21:21:24.261045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.158 [2024-12-05 21:21:24.261051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.418 [2024-12-05 21:21:24.273459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.418 [2024-12-05 21:21:24.273813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.418 [2024-12-05 21:21:24.273829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.418 [2024-12-05 21:21:24.273836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.418 [2024-12-05 21:21:24.274009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.418 [2024-12-05 21:21:24.274182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.418 [2024-12-05 21:21:24.274190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.418 [2024-12-05 21:21:24.274197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.418 [2024-12-05 21:21:24.274203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.418 [2024-12-05 21:21:24.286447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.418 [2024-12-05 21:21:24.286795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.418 [2024-12-05 21:21:24.286812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.418 [2024-12-05 21:21:24.286819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.418 [2024-12-05 21:21:24.286991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.418 [2024-12-05 21:21:24.287165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.418 [2024-12-05 21:21:24.287173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.418 [2024-12-05 21:21:24.287180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.418 [2024-12-05 21:21:24.287186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.418 [2024-12-05 21:21:24.299425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.418 [2024-12-05 21:21:24.299727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.418 [2024-12-05 21:21:24.299743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.418 [2024-12-05 21:21:24.299751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.418 [2024-12-05 21:21:24.299924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.418 [2024-12-05 21:21:24.300097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.418 [2024-12-05 21:21:24.300106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.418 [2024-12-05 21:21:24.300113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.418 [2024-12-05 21:21:24.300122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.418 [2024-12-05 21:21:24.312542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.418 [2024-12-05 21:21:24.312891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.418 [2024-12-05 21:21:24.312907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420 00:28:16.418 [2024-12-05 21:21:24.312914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set 00:28:16.418 [2024-12-05 21:21:24.313087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor 00:28:16.418 [2024-12-05 21:21:24.313260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.418 [2024-12-05 21:21:24.313269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.418 [2024-12-05 21:21:24.313275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.418 [2024-12-05 21:21:24.313282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.418 [2024-12-05 21:21:24.325538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.418 [2024-12-05 21:21:24.325898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.418 [2024-12-05 21:21:24.325915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.418 [2024-12-05 21:21:24.325922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.418 [2024-12-05 21:21:24.326094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.418 [2024-12-05 21:21:24.326269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.418 [2024-12-05 21:21:24.326279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.418 [2024-12-05 21:21:24.326287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.418 [2024-12-05 21:21:24.326294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.418 [2024-12-05 21:21:24.338543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.418 [2024-12-05 21:21:24.338822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.418 [2024-12-05 21:21:24.338838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.418 [2024-12-05 21:21:24.338845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.418 [2024-12-05 21:21:24.339018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.418 [2024-12-05 21:21:24.339192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.339201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.339208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.339215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.351624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.351914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.351933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.351941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.352115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.352289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.352298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.352304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.352310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.364734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.364995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.365011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.365018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.365190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.365363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.365377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.365384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.365390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.377798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.378057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.378073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.378080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.378253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.378431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.378440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.378447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.378453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.390851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.391189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.391205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.391213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.391394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.391567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.391575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.391582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.391588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.419 [2024-12-05 21:21:24.403836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.404177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.404194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.404201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.404381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.404558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.404567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.404573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.404580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.416829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.417132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.417140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.417313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.417491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.417500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.417506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.417513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.429930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.430220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.430237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.430248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.430427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.430603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.430612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.430619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.430625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.419 [2024-12-05 21:21:24.443052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.443330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.443347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.443354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.443533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.443706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.443715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.443721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.443728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.445256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.419 [2024-12-05 21:21:24.456141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.456423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.456441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.456448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.456622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.456795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.456804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.456815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.456822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.469239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.469532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.469547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.469555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.469728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.469900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.469909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.469916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.469923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 [2024-12-05 21:21:24.482338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.482631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.482647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.482655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.482828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.483002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.483010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.483017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.483023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 Malloc0
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.419 [2024-12-05 21:21:24.495441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.495716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.495732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.495739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.495913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.496086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.496094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.496105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.496111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.419 [2024-12-05 21:21:24.508542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.419 [2024-12-05 21:21:24.508868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.419 [2024-12-05 21:21:24.508884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a6120 with addr=10.0.0.2, port=4420
00:28:16.419 [2024-12-05 21:21:24.508891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6120 is same with the state(6) to be set
00:28:16.419 [2024-12-05 21:21:24.509064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6120 (9): Bad file descriptor
00:28:16.419 [2024-12-05 21:21:24.509237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:16.419 [2024-12-05 21:21:24.509245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:16.419 [2024-12-05 21:21:24.509252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:16.419 [2024-12-05 21:21:24.509258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:16.419 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.420 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:16.420 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.420 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:16.420 [2024-12-05 21:21:24.512927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:16.420 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.420 21:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1464042
00:28:16.420 [2024-12-05 21:21:24.521688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:16.678 [2024-12-05 21:21:24.549673] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:28:17.869 4779.43 IOPS, 18.67 MiB/s
[2024-12-05T20:21:26.911Z] 5631.38 IOPS, 22.00 MiB/s
[2024-12-05T20:21:27.845Z] 6269.89 IOPS, 24.49 MiB/s
[2024-12-05T20:21:29.220Z] 6798.60 IOPS, 26.56 MiB/s
[2024-12-05T20:21:30.158Z] 7219.36 IOPS, 28.20 MiB/s
[2024-12-05T20:21:31.092Z] 7574.92 IOPS, 29.59 MiB/s
[2024-12-05T20:21:32.025Z] 7876.31 IOPS, 30.77 MiB/s
[2024-12-05T20:21:32.961Z] 8132.86 IOPS, 31.77 MiB/s
[2024-12-05T20:21:32.961Z] 8357.47 IOPS, 32.65 MiB/s
00:28:24.853 Latency(us)
[2024-12-05T20:21:32.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:24.853 Verification LBA range: start 0x0 length 0x4000
00:28:24.853 Nvme1n1 : 15.01 8358.30 32.65 13193.64 0.00 5919.47 427.15 16352.79
00:28:24.853 [2024-12-05T20:21:32.961Z] ===================================================================================================================
00:28:24.853 [2024-12-05T20:21:32.961Z] Total : 8358.30 32.65 13193.64 0.00 5919.47 427.15 16352.79
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1464976 ']'
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1464976
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1464976 ']'
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1464976
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1464976
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1464976'
killing process with pid 1464976
21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1464976
00:28:25.112 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1464976
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:25.371 21:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:27.904
00:28:27.904 real 0m26.056s
00:28:27.904 user 1m0.968s
00:28:27.904 sys 0m6.682s
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:27.904 ************************************
00:28:27.904 END TEST nvmf_bdevperf
************************************
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.904 ************************************
00:28:27.904 START TEST nvmf_target_disconnect
00:28:27.904 ************************************
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:27.904 * Looking for test storage...
00:28:27.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:27.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:27.904 --rc genhtml_branch_coverage=1
00:28:27.904 --rc genhtml_function_coverage=1
00:28:27.904 --rc genhtml_legend=1
00:28:27.904 --rc geninfo_all_blocks=1
00:28:27.904 --rc geninfo_unexecuted_blocks=1
00:28:27.904
00:28:27.904 '
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:27.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:27.904 --rc genhtml_branch_coverage=1
00:28:27.904 --rc genhtml_function_coverage=1
00:28:27.904 --rc genhtml_legend=1
00:28:27.904 --rc geninfo_all_blocks=1
00:28:27.904 --rc geninfo_unexecuted_blocks=1
00:28:27.904
00:28:27.904 '
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:28:27.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:27.904 --rc genhtml_branch_coverage=1
00:28:27.904 --rc genhtml_function_coverage=1
00:28:27.904 --rc genhtml_legend=1
00:28:27.904 --rc geninfo_all_blocks=1
00:28:27.904 --rc geninfo_unexecuted_blocks=1
00:28:27.904
00:28:27.904 '
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:28:27.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:27.904 --rc genhtml_branch_coverage=1
00:28:27.904 --rc genhtml_function_coverage=1
00:28:27.904 --rc genhtml_legend=1
00:28:27.904 --rc geninfo_all_blocks=1
00:28:27.904 --rc geninfo_unexecuted_blocks=1
00:28:27.904
00:28:27.904 '
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:27.904 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect --
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.905 21:21:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:27.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.905 21:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.475 
21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:34.475 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.475 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:34.475 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:34.476 Found net devices under 0000:86:00.0: cvl_0_0 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:34.476 Found net devices under 0000:86:00.1: cvl_0_1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.476 21:21:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:28:34.476 00:28:34.476 --- 10.0.0.2 ping statistics --- 00:28:34.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.476 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:28:34.476 00:28:34.476 --- 10.0.0.1 ping statistics --- 00:28:34.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.476 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.476 21:21:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:34.476 ************************************ 00:28:34.476 START TEST nvmf_target_disconnect_tc1 00:28:34.476 ************************************ 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.476 [2024-12-05 21:21:41.749972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.476 [2024-12-05 21:21:41.750030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d05ac0 with 
addr=10.0.0.2, port=4420 00:28:34.476 [2024-12-05 21:21:41.750053] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:34.476 [2024-12-05 21:21:41.750062] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:34.476 [2024-12-05 21:21:41.750069] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:34.476 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:34.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:34.476 Initializing NVMe Controllers 00:28:34.476 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:34.477 00:28:34.477 real 0m0.107s 00:28:34.477 user 0m0.049s 00:28:34.477 sys 0m0.053s 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.477 ************************************ 00:28:34.477 END TEST nvmf_target_disconnect_tc1 00:28:34.477 ************************************ 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:34.477 21:21:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:34.477 ************************************
00:28:34.477 START TEST nvmf_target_disconnect_tc2
************************************
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1470104
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1470104
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1470104 ']'
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:34.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:34.477 21:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.477 [2024-12-05 21:21:41.896940] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:28:34.477 [2024-12-05 21:21:41.896985] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:34.477 [2024-12-05 21:21:41.977841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:34.477 [2024-12-05 21:21:42.020034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:34.477 [2024-12-05 21:21:42.020071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:34.477 [2024-12-05 21:21:42.020077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:34.477 [2024-12-05 21:21:42.020083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:34.477 [2024-12-05 21:21:42.020088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:34.477 [2024-12-05 21:21:42.021591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:34.477 [2024-12-05 21:21:42.021701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:34.477 [2024-12-05 21:21:42.021808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:34.477 [2024-12-05 21:21:42.021810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.736 Malloc0
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.736 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.736 [2024-12-05 21:21:42.798649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.737 [2024-12-05 21:21:42.827713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1470180
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:28:34.737 21:21:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:37.296 21:21:44
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1470104 00:28:37.296 21:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 
Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 [2024-12-05 21:21:44.862748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O 
failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 
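[editor's note] The (sct=0, sc=8) pairs above are NVMe completion status fields: status code type 0 is the generic command status set, and, going by the NVMe base specification (an assumption about the value, not something the harness prints), status code 0x08 in that set is "Command Aborted due to SQ Deletion" — consistent with the submission queues vanishing when the target process is killed. A minimal hypothetical decoder, with a deliberately partial table:

```python
# Partial decode of NVMe completion status pairs as printed in the log,
# e.g. "(sct=0, sc=8)". The table below is an assumption drawn from the
# NVMe base spec's generic (SCT 0) status codes; it is not part of SPDK.
GENERIC_SC = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable name for an (sct, sc) completion status pair."""
    if sct == 0:
        return GENERIC_SC.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x}, sc=0x{sc:02x}"

print(decode_status(0, 8))  # the pair reported for every failed I/O above
```

Under that reading, every "completed with error (sct=0, sc=8)" line is the host-side reflection of the same event: the target's queues went away mid-workload.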
00:28:37.296 [2024-12-05 21:21:44.862953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 
starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 [2024-12-05 21:21:44.863144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 
00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Write completed with error (sct=0, sc=8) 00:28:37.296 starting I/O failed 00:28:37.296 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Write completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Write completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Write completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Write completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Write completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 Read completed with error (sct=0, sc=8) 00:28:37.297 starting I/O failed 00:28:37.297 [2024-12-05 21:21:44.863344] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.297 [2024-12-05 21:21:44.863528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.863553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.863702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.863712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.863800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.863810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.863945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.863955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.864177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.864187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 
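[editor's note] The connect() failures that follow all report errno = 111, which on Linux is ECONNREFUSED: the reconnect example keeps retrying 10.0.0.2:4420 after the target was killed with kill -9, so nothing is listening on that port any more. A quick sanity check (assumes Linux errno numbering):

```python
import errno
import os

# On Linux, errno 111 is ECONNREFUSED ("Connection refused") -- the value
# posix_sock_create reports in the log lines above once the target is gone.
print(errno.errorcode[111])              # symbolic name of errno 111
print(os.strerror(errno.ECONNREFUSED))   # its human-readable message
```

The errno value is platform-specific (111 is the Linux value); the symbolic constant ECONNREFUSED is the portable way to test for it.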
00:28:37.297 [2024-12-05 21:21:44.864388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.864399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.864548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.864559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.864706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.864716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.864797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.864806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.864952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.864962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 
00:28:37.297 [2024-12-05 21:21:44.865054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 
00:28:37.297 [2024-12-05 21:21:44.865621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.865972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.865982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 
00:28:37.297 [2024-12-05 21:21:44.866037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.866121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.866272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.866480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.866640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 
00:28:37.297 [2024-12-05 21:21:44.866708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.866795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.866871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.866881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.867018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.867028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.867100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.867110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 
00:28:37.297 [2024-12-05 21:21:44.867250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.867260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.867447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.867459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.297 [2024-12-05 21:21:44.867530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.297 [2024-12-05 21:21:44.867539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.297 qpair failed and we were unable to recover it. 00:28:37.298 [2024-12-05 21:21:44.867635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.298 [2024-12-05 21:21:44.867645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.298 qpair failed and we were unable to recover it. 00:28:37.298 [2024-12-05 21:21:44.867786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.298 [2024-12-05 21:21:44.867796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.298 qpair failed and we were unable to recover it. 
00:28:37.298 [2024-12-05 21:21:44.867853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.298 [2024-12-05 21:21:44.867863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.298 qpair failed and we were unable to recover it.
00:28:37.298 [... the same three-line record — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 21:21:44.867943 through 21:21:44.881733. All attempts use tqpair=0x6afbe0, except three attempts at 21:21:44.872845, .872964, and .873177, which use tqpair=0x7fa9e0000b90, 0x7fa9dc000b90, and 0x7fa9e8000b90 respectively ...]
00:28:37.301 [2024-12-05 21:21:44.881849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.881869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.881949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.881971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.882063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.882246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.882348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 
00:28:37.301 [2024-12-05 21:21:44.882456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.882699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.882805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.882926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.882943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.883081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.883099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 
00:28:37.301 [2024-12-05 21:21:44.883242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.883260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.883424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.883456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.883637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.883668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.883841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.883881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.884141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.884159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 
00:28:37.301 [2024-12-05 21:21:44.884379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.884397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.884631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.884650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.884817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.884835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.884987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.885005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.885218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.885236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 
00:28:37.301 [2024-12-05 21:21:44.885467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.885486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.885705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.885723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.885888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.885906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.886006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.886128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 
00:28:37.301 [2024-12-05 21:21:44.886220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.886325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.886449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.886556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.301 [2024-12-05 21:21:44.886727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 
00:28:37.301 [2024-12-05 21:21:44.886890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.301 [2024-12-05 21:21:44.886908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.301 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.886992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.887102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.887199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.887300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.887544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.887639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.887808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.887982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.887999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.888088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.888204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.888323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.888546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.888663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.888824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.888924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.888940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.889079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.889231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.889318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.889478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.889636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.889723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.889903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.889919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.890027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.890044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.890185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.890201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.890352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.890409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.890524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.890555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.890667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.890698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.890812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.890843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.891102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.891133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.891258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.891289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.891393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.891421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.891530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.891554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.891780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.891811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.892064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.892095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 
00:28:37.302 [2024-12-05 21:21:44.892333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.892364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.892641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.892674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.892843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.892872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.893074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.302 [2024-12-05 21:21:44.893105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.302 qpair failed and we were unable to recover it. 00:28:37.302 [2024-12-05 21:21:44.893312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.303 [2024-12-05 21:21:44.893343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.303 qpair failed and we were unable to recover it. 
00:28:37.303 [2024-12-05 21:21:44.893538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.303 [2024-12-05 21:21:44.893570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.303 qpair failed and we were unable to recover it. 00:28:37.303 [2024-12-05 21:21:44.893831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.303 [2024-12-05 21:21:44.893862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.303 qpair failed and we were unable to recover it. 00:28:37.303 [2024-12-05 21:21:44.894044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.303 [2024-12-05 21:21:44.894075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.303 qpair failed and we were unable to recover it. 00:28:37.303 [2024-12-05 21:21:44.894346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.303 [2024-12-05 21:21:44.894389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.303 qpair failed and we were unable to recover it. 00:28:37.303 [2024-12-05 21:21:44.894630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.303 [2024-12-05 21:21:44.894661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.303 qpair failed and we were unable to recover it. 
00:28:37.303 [2024-12-05 21:21:44.894901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.303 [2024-12-05 21:21:44.894932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.303 qpair failed and we were unable to recover it.
00:28:37.303 [... the identical posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet (errno = 111, tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420) repeats with advancing timestamps through 2024-12-05 21:21:44.920841 ...]
00:28:37.306 [2024-12-05 21:21:44.921041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.921073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.921328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.921360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.921483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.921515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.921621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.921652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.921759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.921791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 
00:28:37.306 [2024-12-05 21:21:44.921993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.922024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.922217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.922248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.922392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.922425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.922664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.922695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.922796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.922828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 
00:28:37.306 [2024-12-05 21:21:44.923001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.923032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.923318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.923351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.923560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.923592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.923769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.923800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.923920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.923952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 
00:28:37.306 [2024-12-05 21:21:44.924080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.924112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.924386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.924419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.924541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.924573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.924839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.924870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.925057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.925089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 
00:28:37.306 [2024-12-05 21:21:44.925263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.925295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.925479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.925512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.925776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.925808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.926074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.926105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.926250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.926282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 
00:28:37.306 [2024-12-05 21:21:44.926545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.926578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.926694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.926726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.306 [2024-12-05 21:21:44.926909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.306 [2024-12-05 21:21:44.926940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.306 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.927118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.927149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.927263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.927294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.927532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.927564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.927748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.927780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.928049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.928080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.928263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.928295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.928493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.928526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.928649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.928680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.928865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.928897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.929017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.929055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.929230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.929261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.929358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.929401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.929665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.929697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.929888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.929919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.930106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.930137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.930309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.930340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.930532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.930564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.930697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.930729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.930939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.930970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.931086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.931117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.931303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.931335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.931546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.931578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.931768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.931799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.932067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.932099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.932274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.932306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.932504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.932537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.932672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.932704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.932887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.932920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.933033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.933065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.933306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.933338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.933614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.933646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.933911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.933942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.934187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.934219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.934463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.934497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.934628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.934660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.934784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.934816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.307 [2024-12-05 21:21:44.935011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.935043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 
00:28:37.307 [2024-12-05 21:21:44.935233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.307 [2024-12-05 21:21:44.935265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.307 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.935448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.935480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.935674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.935706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.935894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.935925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.936139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.936171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 
00:28:37.308 [2024-12-05 21:21:44.936356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.936397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.936665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.936698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.936883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.936915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.937017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.937049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 00:28:37.308 [2024-12-05 21:21:44.937165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.308 [2024-12-05 21:21:44.937196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.308 qpair failed and we were unable to recover it. 
00:28:37.308 [2024-12-05 21:21:44.937434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.308 [2024-12-05 21:21:44.937467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.308 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated for tqpair=0x7fa9e0000b90 (timestamps 21:21:44.937733 through 21:21:44.943733); elided ...]
00:28:37.309 [2024-12-05 21:21:44.943959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.309 [2024-12-05 21:21:44.944030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.309 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated for tqpair=0x7fa9e8000b90 (timestamps 21:21:44.944343 through 21:21:44.962895); elided ...]
00:28:37.311 [2024-12-05 21:21:44.962997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.963029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.963223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.963255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.963390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.963423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.963669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.963700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.963886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.963918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 
00:28:37.311 [2024-12-05 21:21:44.964208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.964240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.964343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.964383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.964569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.964607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.964798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.964829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.965123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.965154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 
00:28:37.311 [2024-12-05 21:21:44.965364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.965404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.965645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.965677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.965873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.965904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.966117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.966149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.966335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.966376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 
00:28:37.311 [2024-12-05 21:21:44.966636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.966668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.966858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.966889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.967010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.967042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.967330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.967362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.967547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.967579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 
00:28:37.311 [2024-12-05 21:21:44.967766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.967798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.968006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.968039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.968264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.968295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.968487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.968520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 00:28:37.311 [2024-12-05 21:21:44.968696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.311 [2024-12-05 21:21:44.968733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.311 qpair failed and we were unable to recover it. 
00:28:37.311 [2024-12-05 21:21:44.968984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.969016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.969268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.969299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.969427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.969460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.969728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.969759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.970054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.970086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.970274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.970305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.970493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.970526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.970763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.970794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.970965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.970996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.971271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.971342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.971568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.971604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.971782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.971815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.971924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.971955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.972061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.972093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.972358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.972400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.972584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.972615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.972812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.972843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.972959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.972990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.973197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.973229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.973444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.973476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.973688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.973720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.974004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.974035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.974215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.974256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.974395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.974427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.974639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.974671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.974919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.974951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.975167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.975199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.975322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.975354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.975550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.975581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.975765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.975796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.975927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.975960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.976223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.976254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.976518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.976550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.976729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.976760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.976944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.976975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 
00:28:37.312 [2024-12-05 21:21:44.977189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.977221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.977434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.977466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.977675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.312 [2024-12-05 21:21:44.977707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.312 qpair failed and we were unable to recover it. 00:28:37.312 [2024-12-05 21:21:44.977829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.977861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.978048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.978079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 
00:28:37.313 [2024-12-05 21:21:44.978315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.978346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.978475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.978508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.978754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.978785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.979051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.979083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.979365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.979408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 
00:28:37.313 [2024-12-05 21:21:44.979597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.979629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.979812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.979843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.980086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.980118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.980387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.980419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.980615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.980651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 
00:28:37.313 [2024-12-05 21:21:44.980793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.980824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.981005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.981036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.981319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.981350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.981623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.981654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.981854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.981886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 
00:28:37.313 [2024-12-05 21:21:44.982096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.982127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.982402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.982435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.982675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.982706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.982875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.982907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 00:28:37.313 [2024-12-05 21:21:44.983145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.313 [2024-12-05 21:21:44.983176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.313 qpair failed and we were unable to recover it. 
00:28:37.313 [2024-12-05 21:21:44.983387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.983419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.983671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.983703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.983897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.983933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.984177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.984208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.984472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.984505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.984638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.984671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.984849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.984881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.985059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.985090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.985275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.985306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.985548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.985580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.985758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.985788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.985909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.985941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.986123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.986154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.986338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.986379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.986513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.986544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.986647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.313 [2024-12-05 21:21:44.986679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.313 qpair failed and we were unable to recover it.
00:28:37.313 [2024-12-05 21:21:44.986856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.986888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.986992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.987023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.987159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.987191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.987453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.987485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.987726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.987758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.987942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.987973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.988156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.988187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.988358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.988400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.988531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.988562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.988744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.988775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.988908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.988940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.989042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.989074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.989243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.989274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.989473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.989509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.989628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.989660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.989832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.989863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.989998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.990029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.990219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.990249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.990434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.990467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.990728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.990761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.991023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.991054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.991238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.991270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.991470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.991502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.991628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.991660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.991772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.991802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.992010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.992042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.992212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.992242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.992510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.992542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.992782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.992814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.993004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.993034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.993224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.993255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.993457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.993489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.993663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.993694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.993867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.993898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.314 qpair failed and we were unable to recover it.
00:28:37.314 [2024-12-05 21:21:44.994029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.314 [2024-12-05 21:21:44.994061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.994255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.994286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.994498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.994531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.994795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.994827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.995068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.995099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.995218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.995248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.995398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.995431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.995695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.995727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.995907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.995937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.996066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.996098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.996292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.996324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.996506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.996539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.996781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.996813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.996916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.996947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.997225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.997256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.997448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.997480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.997721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.997752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.997922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.997954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.998173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.998203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.998463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.998501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.998767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.998800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.998988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.999019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.999208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.999239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.999502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.999534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:44.999807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:44.999839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.000111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.000143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.000262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.000294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.000400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.000439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.000654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.000688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.000875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.000907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.001102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.001134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.001311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.001343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.001617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.001648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.001784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.001817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.002002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.002033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.002269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.002300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.002413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.002445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.002705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.002737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.315 qpair failed and we were unable to recover it.
00:28:37.315 [2024-12-05 21:21:45.002988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.315 [2024-12-05 21:21:45.003019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.003193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.316 [2024-12-05 21:21:45.003225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.003413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.316 [2024-12-05 21:21:45.003445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.003689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.316 [2024-12-05 21:21:45.003721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.003931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.316 [2024-12-05 21:21:45.003961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.004094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.316 [2024-12-05 21:21:45.004126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.004297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.316 [2024-12-05 21:21:45.004328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.316 qpair failed and we were unable to recover it.
00:28:37.316 [2024-12-05 21:21:45.004521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.004553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.004679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.004710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.004897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.004929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.005175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.005206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.005331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.005362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-05 21:21:45.005504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.005535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.005716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.005748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.005953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.005985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.006152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.006183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.006399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.006433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-05 21:21:45.006672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.006704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.006875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.006906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.007113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.007144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.007349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.007392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.007518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.007554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-05 21:21:45.007673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.007704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.007891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.007922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.008041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.008072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.008324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.008356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.008606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.008638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-05 21:21:45.008754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.008787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.008959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.008991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.009191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.009223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.009411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.009444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.009625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.009659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-05 21:21:45.009899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.009930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.010101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.010133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.010303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.010334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.010497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.010530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.010714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.010745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-05 21:21:45.011008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.011041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.011210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.316 [2024-12-05 21:21:45.011241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.316 qpair failed and we were unable to recover it. 00:28:37.316 [2024-12-05 21:21:45.011479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.011512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.011723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.011757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.011938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.011970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.012213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.012245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.012434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.012467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.012597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.012629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.012811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.012842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.013030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.013062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.013257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.013287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.013532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.013565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.013746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.013777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.014022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.014053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.014257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.014289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.014410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.014445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.014614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.014645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.014906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.014938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.015110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.015141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.015320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.015351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.015626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.015657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.015789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.015821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.016095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.016126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.016388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.016421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.016607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.016644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.016832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.016863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.017081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.017112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.017324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.017355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.017628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.017661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.017874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.018065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.018096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.018286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.018317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.018534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.018567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.018698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.018730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.018904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.018935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.019124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.019155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.019343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.019385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.019508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.019539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.019748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.019780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 00:28:37.317 [2024-12-05 21:21:45.019886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.317 [2024-12-05 21:21:45.019916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.317 qpair failed and we were unable to recover it. 
00:28:37.317 [2024-12-05 21:21:45.020103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.020134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.020272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.020305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.020508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.020540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.020723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.020754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.020938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.020968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 
00:28:37.318 [2024-12-05 21:21:45.021232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.021263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.021468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.021499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.021745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.021778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.021962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.021994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.022130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.022162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 
00:28:37.318 [2024-12-05 21:21:45.022341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.022381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.022600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.022632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.022819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.022849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.022976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.023007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 00:28:37.318 [2024-12-05 21:21:45.023115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.318 [2024-12-05 21:21:45.023146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.318 qpair failed and we were unable to recover it. 
00:28:37.318 [2024-12-05 21:21:45.023322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.318 [2024-12-05 21:21:45.023354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.318 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / qpair recovery failure sequence for tqpair=0x7fa9dc000b90 (addr=10.0.0.2, port=4420) repeats verbatim from 21:21:45.023570 through 21:21:45.048331 ...]
00:28:37.321 [2024-12-05 21:21:45.048525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.048558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.048700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.048732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.048857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.048888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.049086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.049116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.049301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.049333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 
00:28:37.321 [2024-12-05 21:21:45.049451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.049484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.049607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.049638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.049903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.049935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.050123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.050154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.050364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.050417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 
00:28:37.321 [2024-12-05 21:21:45.050618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.050650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.050928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.050959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.051197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.051228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.051499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.051532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.051774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.051812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 
00:28:37.321 [2024-12-05 21:21:45.052010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.052041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.052170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.052203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.052383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.052416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.052532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.321 [2024-12-05 21:21:45.052563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.321 qpair failed and we were unable to recover it. 00:28:37.321 [2024-12-05 21:21:45.052681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.052712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.052905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.052936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.053052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.053082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.053289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.053320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.053564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.053598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.053709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.053740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.053930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.053963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.054098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.054129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.054241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.054273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.054466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.054499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.054674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.054708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.054943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.054975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.055215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.055246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.055361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.055402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.055508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.055539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.055676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.055706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.055826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.055856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.055972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.056004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.056266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.056297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.056478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.056511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.056619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.056650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.056778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.056810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.056989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bdb20 is same with the state(6) to be set 00:28:37.322 [2024-12-05 21:21:45.057340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.057429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.057566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.057602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.057793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.057826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.057935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.057967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.058233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.058266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.058505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.058539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.058668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.058702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.058839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.058870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.059113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.059146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.059313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.059345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.059525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.059558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.059732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.059764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.060032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.060065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.060207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.060239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 
00:28:37.322 [2024-12-05 21:21:45.060363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.060405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.060647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.322 [2024-12-05 21:21:45.060678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.322 qpair failed and we were unable to recover it. 00:28:37.322 [2024-12-05 21:21:45.060788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.060821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.060945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.060978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.061096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.061128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.061265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.061297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.061472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.061506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.061693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.061726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.061897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.061929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.062168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.062200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.062391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.062426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.062621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.062654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.062790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.062830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.063005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.063036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.063211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.063244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.063414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.063447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.063735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.063768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.064018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.064050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.064247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.064279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.064538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.064570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.064757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.064790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.064923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.064955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.065191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.065224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.065406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.065439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.065726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.065759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.065876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.065907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.066026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.066059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.066241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.066273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.066395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.066428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.066602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.066635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.066740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.066772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.066900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.066933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.067133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.067166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.067338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.067381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.067550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.067582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 
00:28:37.323 [2024-12-05 21:21:45.067703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.067735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.323 [2024-12-05 21:21:45.067852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.323 [2024-12-05 21:21:45.067884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.323 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.068051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.068084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.068343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.068382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.068496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.068527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.068713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.068746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.068862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.068894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.069138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.069171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.069342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.069382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.069621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.069653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.069838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.069869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.070003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.070035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.070293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.070326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.070451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.070486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.070706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.070738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.070919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.070952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.071082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.071113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.071249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.071288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.071406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.071439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.071678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.071712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.071923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.071955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.072142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.072174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.072344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.072406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.072514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.072546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.072734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.072765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.072955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.072987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.073103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.073135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.073280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.073312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.073506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.073539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.073756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.073789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.073965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.073996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.074243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.074275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.074448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.074481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.074669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.074701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.074940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 
00:28:37.324 [2024-12-05 21:21:45.075218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.075251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.075425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.075458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.075592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.075625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.075861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.075893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.324 qpair failed and we were unable to recover it. 00:28:37.324 [2024-12-05 21:21:45.076020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.324 [2024-12-05 21:21:45.076052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 
00:28:37.325 [2024-12-05 21:21:45.076174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.076206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.076386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.076420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.076606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.076638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.076747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.076779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.076969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.077002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 
00:28:37.325 [2024-12-05 21:21:45.077120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.077152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.077392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.077425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.077551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.077584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.077843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.077874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.078059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.078093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 
00:28:37.325 [2024-12-05 21:21:45.078208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.078240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.078478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.078511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.078779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.078812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.078919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.078951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.079076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.079108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 
00:28:37.325 [2024-12-05 21:21:45.079238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.079270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.079474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.079507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.079743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.079782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.079956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.079988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.080091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.080124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 
00:28:37.325 [2024-12-05 21:21:45.080345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.080396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.080683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.080716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.080887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.080919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.081111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.081143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.081338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.081377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 
00:28:37.325 [2024-12-05 21:21:45.081505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.081538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.081680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.081712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.081816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.325 [2024-12-05 21:21:45.081847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.325 qpair failed and we were unable to recover it. 00:28:37.325 [2024-12-05 21:21:45.082022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.082054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.082225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.082256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 
00:28:37.326 [2024-12-05 21:21:45.082431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.082463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.082652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.082684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.082946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.082978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.083242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.083273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.083473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.083505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 
00:28:37.326 [2024-12-05 21:21:45.083676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.083709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.083900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.083932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.084046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.084078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.084201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.084232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.084485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.084518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 
00:28:37.326 [2024-12-05 21:21:45.084725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.084756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.084880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.084912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.085085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.085117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.085327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.085359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 00:28:37.326 [2024-12-05 21:21:45.085626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.326 [2024-12-05 21:21:45.085658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.326 qpair failed and we were unable to recover it. 
00:28:37.326 [2024-12-05 21:21:45.090034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.326 [2024-12-05 21:21:45.090065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.327 qpair failed and we were unable to recover it.
00:28:37.327 [2024-12-05 21:21:45.090306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.327 [2024-12-05 21:21:45.090337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.327 qpair failed and we were unable to recover it.
00:28:37.327 [2024-12-05 21:21:45.090639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.327 [2024-12-05 21:21:45.090710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.327 qpair failed and we were unable to recover it.
00:28:37.327 [2024-12-05 21:21:45.090852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.327 [2024-12-05 21:21:45.090889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.327 qpair failed and we were unable to recover it.
00:28:37.327 [2024-12-05 21:21:45.091063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.327 [2024-12-05 21:21:45.091096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.327 qpair failed and we were unable to recover it.
00:28:37.329 [2024-12-05 21:21:45.110814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.329 [2024-12-05 21:21:45.110847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.329 qpair failed and we were unable to recover it. 00:28:37.329 [2024-12-05 21:21:45.110961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.329 [2024-12-05 21:21:45.110992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.329 qpair failed and we were unable to recover it. 00:28:37.329 [2024-12-05 21:21:45.111229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.329 [2024-12-05 21:21:45.111259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.329 qpair failed and we were unable to recover it. 00:28:37.329 [2024-12-05 21:21:45.111503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.329 [2024-12-05 21:21:45.111536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.329 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.111639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.111670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.111772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.111803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.111911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.111943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.112122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.112154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.112282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.112314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.112450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.112481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.112669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.112701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.112886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.112918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.113089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.113120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.113375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.113408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.113589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.113622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.113806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.113838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.114008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.114039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.114278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.114310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.114437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.114469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.114710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.114742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.114980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.115011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.115133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.115164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.115340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.115388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.115632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.115663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.115863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.115894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.116075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.116106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.116284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.116317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.116530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.116563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.116815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.116846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.117039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.117071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.117243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.117274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.117523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.117557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.117806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.117838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.118013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.118045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.118153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.118185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 
00:28:37.330 [2024-12-05 21:21:45.118329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.118362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.118553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.118585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.118694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.118726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.118980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.330 [2024-12-05 21:21:45.119013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.330 qpair failed and we were unable to recover it. 00:28:37.330 [2024-12-05 21:21:45.119265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.119296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.119497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.119529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.119738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.119771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.119951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.119983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.120100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.120133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.120349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.120393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.120526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.120556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.120737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.120767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.121003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.121036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.121230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.121261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.121447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.121480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.121671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.121702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.121884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.121915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.122103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.122135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.122401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.122434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.122690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.122724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.122836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.122867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.123002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.123035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.123148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.123181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.123350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.123391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.123522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.123554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.123789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.123821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.124054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.124084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.124269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.124306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.124439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.124472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.124607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.124638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.124898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.124930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.125057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.125089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.125214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.125245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.125437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.125470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 00:28:37.331 [2024-12-05 21:21:45.125590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.125621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.331 qpair failed and we were unable to recover it. 
00:28:37.331 [2024-12-05 21:21:45.125822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.331 [2024-12-05 21:21:45.125853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.332 qpair failed and we were unable to recover it. 00:28:37.332 [2024-12-05 21:21:45.126030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.332 [2024-12-05 21:21:45.126062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.332 qpair failed and we were unable to recover it. 00:28:37.332 [2024-12-05 21:21:45.126240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.332 [2024-12-05 21:21:45.126273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.332 qpair failed and we were unable to recover it. 00:28:37.332 [2024-12-05 21:21:45.126479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.332 [2024-12-05 21:21:45.126512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.332 qpair failed and we were unable to recover it. 00:28:37.332 [2024-12-05 21:21:45.126720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.332 [2024-12-05 21:21:45.126753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:37.332 qpair failed and we were unable to recover it. 
00:28:37.332 [2024-12-05 21:21:45.127019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.332 [2024-12-05 21:21:45.127051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:37.332 qpair failed and we were unable to recover it.
[... the three log lines above repeat 93 more times (21:21:45.127230 through 21:21:45.146332) for tqpair=0x7fa9dc000b90 ...]
00:28:37.334 [2024-12-05 21:21:45.146683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.334 [2024-12-05 21:21:45.146757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.334 qpair failed and we were unable to recover it.
[... the three log lines above repeat 20 more times (21:21:45.146902 through 21:21:45.151672) for tqpair=0x6afbe0 ...]
00:28:37.335 [2024-12-05 21:21:45.151865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.151898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.152014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.152045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.152157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.152189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.152455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.152489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.152621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.152653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 
00:28:37.335 [2024-12-05 21:21:45.152828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.152860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.153101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.153140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.153322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.153353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.153601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.153634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.153824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 
00:28:37.335 [2024-12-05 21:21:45.154112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.154145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.154339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.154378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.154690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.154721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.154892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.154923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.155199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.155231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 
00:28:37.335 [2024-12-05 21:21:45.155403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.155435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.155622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.155654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.155771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.155802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.155932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.155964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.156232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.156264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 
00:28:37.335 [2024-12-05 21:21:45.156452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.156486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.156599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.156631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.156806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.156837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.157023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.157055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.157238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.157269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 
00:28:37.335 [2024-12-05 21:21:45.157455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.157488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.335 [2024-12-05 21:21:45.157691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.335 [2024-12-05 21:21:45.157724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.335 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.157971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.158002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.158289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.158322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.158571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.158604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.158866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.158898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.159139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.159172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.159345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.159386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.159567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.159599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.159869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.159901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.160031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.160064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.160189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.160222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.160410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.160442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.160624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.160655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.160941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.160973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.161149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.161180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.161308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.161339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.161588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.161619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.161816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.161848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.162113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.162144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.162331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.162363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.162557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.162589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.162859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.162892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.163179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.163213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.163401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.163434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.163673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.163706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.163897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.163929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.164167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.164199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.164386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.164419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.164662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.164694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.164980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.165012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.165282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.165314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.165510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.165542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.165796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.165829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.166094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.166126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.166412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.166444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.166697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.166731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.166906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.166939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.167119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.167151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.167268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.167300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 
00:28:37.336 [2024-12-05 21:21:45.167492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.167525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.167650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.167682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.167943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.167976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.336 qpair failed and we were unable to recover it. 00:28:37.336 [2024-12-05 21:21:45.168099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.336 [2024-12-05 21:21:45.168130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 00:28:37.337 [2024-12-05 21:21:45.168305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-12-05 21:21:45.168337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 
00:28:37.337 [2024-12-05 21:21:45.168606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-12-05 21:21:45.168639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 00:28:37.337 [2024-12-05 21:21:45.168834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-12-05 21:21:45.168867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 00:28:37.337 [2024-12-05 21:21:45.169129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-12-05 21:21:45.169161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 00:28:37.337 [2024-12-05 21:21:45.169409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-12-05 21:21:45.169443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 00:28:37.337 [2024-12-05 21:21:45.169612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-12-05 21:21:45.169651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.337 qpair failed and we were unable to recover it. 
00:28:37.337 [2024-12-05 21:21:45.169916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.337 [2024-12-05 21:21:45.169947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.337 qpair failed and we were unable to recover it.
00:28:37.339 [... the three lines above repeat continuously with timestamps 2024-12-05 21:21:45.169916 through 21:21:45.198124; every connect() attempt to 10.0.0.2:4420 returned errno 111 (ECONNREFUSED) and each qpair failed without recovery ...]
00:28:37.339 [2024-12-05 21:21:45.198414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.339 [2024-12-05 21:21:45.198447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.339 qpair failed and we were unable to recover it. 00:28:37.339 [2024-12-05 21:21:45.198712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.339 [2024-12-05 21:21:45.198744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.339 qpair failed and we were unable to recover it. 00:28:37.339 [2024-12-05 21:21:45.199020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.339 [2024-12-05 21:21:45.199053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.339 qpair failed and we were unable to recover it. 00:28:37.339 [2024-12-05 21:21:45.199341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.199380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.199564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.199595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.199804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.199836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.200096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.200128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.200261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.200292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.200537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.200570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.200754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.200784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.200975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.201006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.201210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.201241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.201507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.201541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.201741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.201772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.202039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.202069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.202344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.202382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.202602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.202633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.202896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.202928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.203059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.203091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.203282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.203314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.203516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.203556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.203822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.203854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.204094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.204126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.204408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.204442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.204640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.204671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.204858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.204889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.205011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.205043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.205147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.205179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.205394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.205426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.205693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.205727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.205991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.206024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.206205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.206237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.206440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.206474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.206582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.206616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.206838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.206871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.207087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.207119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.207413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.207446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.207716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.207747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.207920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.207952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.208191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.208223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.208407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.208441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.208679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.208715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.208893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.208924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.209142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.209173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.209438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.209470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 
00:28:37.340 [2024-12-05 21:21:45.209705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.209737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.209913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.209944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.210137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.340 [2024-12-05 21:21:45.210169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.340 qpair failed and we were unable to recover it. 00:28:37.340 [2024-12-05 21:21:45.210362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.210403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.210622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.210653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 
00:28:37.341 [2024-12-05 21:21:45.210839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.210870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.211159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.211190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.211405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.211437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.211704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.211735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.212033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.212066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 
00:28:37.341 [2024-12-05 21:21:45.212208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.212241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.212513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.212547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.212814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.212848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.213023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.213055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.213270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.213301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 
00:28:37.341 [2024-12-05 21:21:45.213541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.213574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.213840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.213878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.214161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.214193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.214400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.214433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.214714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.214746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 
00:28:37.341 [2024-12-05 21:21:45.215015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.215046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.215332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.215364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.215511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.215543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.215725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.215757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.215934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.215965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 
00:28:37.341 [2024-12-05 21:21:45.216188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.216220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.216465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.216498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.216742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.216774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.217023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.217053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 00:28:37.341 [2024-12-05 21:21:45.217248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.217280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it. 
00:28:37.341 [2024-12-05 21:21:45.217431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.341 [2024-12-05 21:21:45.217463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.341 qpair failed and we were unable to recover it.
[The identical connect()/sock-connection-error/qpair-failed triplet for tqpair=0x6afbe0 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously from 21:21:45.217 through 21:21:45.248; duplicate entries omitted.]
00:28:37.344 [2024-12-05 21:21:45.248396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.248428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.248702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.248735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.249028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.249060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.249259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.249291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.249481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.249515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 
00:28:37.344 [2024-12-05 21:21:45.249784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.249815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.250080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.250113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.250387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.250428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.250710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.250743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.250924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.250958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 
00:28:37.344 [2024-12-05 21:21:45.251230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.251264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.251455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.251488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.251728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.251761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.251962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.251996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.344 [2024-12-05 21:21:45.252296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.252328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 
00:28:37.344 [2024-12-05 21:21:45.252539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.344 [2024-12-05 21:21:45.252570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.344 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.252783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.252816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.253032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.253066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.253259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.253292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.253562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.253597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.253883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.253917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.254191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.254224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.254486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.254519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.254798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.254830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.255046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.255078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.255333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.255364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.255671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.255703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.255910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.255942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.256137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.256170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.256365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.256408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.256679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.256711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.257037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.257070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.257361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.257417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.257618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.257651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.257778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.257812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.258093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.258126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.258422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.258458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.258661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.258695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.258942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.258974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.259271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.259303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.259519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.259554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.259842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.259876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.260168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.260199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.260400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.260432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.260709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.260741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.261014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.261048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.261336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.261376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.261665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.261697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.261887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.261924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.262124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.262158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 
00:28:37.345 [2024-12-05 21:21:45.262433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.262466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.262782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.262816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.263013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.263045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.345 qpair failed and we were unable to recover it. 00:28:37.345 [2024-12-05 21:21:45.263262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.345 [2024-12-05 21:21:45.263295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.263569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.263602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.263824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.263857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.264081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.264113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.264386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.264420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.264565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.264597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.264846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.264878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.265171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.265203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.265444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.265479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.265614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.265647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.265947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.265979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.266275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.266307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.266587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.266621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.266812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.266844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.267115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.267147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.267439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.267471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.267674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.267706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.267883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.267914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.268187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.268220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.268348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.268387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.268601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.268633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.268905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.268938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.269135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.269174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.269448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.269481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.269678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.269712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.269955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.269987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.270163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.270196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.270497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.270529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.270825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.270857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.271155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.271186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.271456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.271490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.271786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.271818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 
00:28:37.346 [2024-12-05 21:21:45.272087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.272118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.272431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.346 [2024-12-05 21:21:45.272464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.346 qpair failed and we were unable to recover it. 00:28:37.346 [2024-12-05 21:21:45.272716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.272748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.273059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.273091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.273330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.273362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.273529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.273562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.273810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.273842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.274145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.274178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.274467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.274501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.274777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.274809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.275092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.275124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.275279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.275311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.275619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.275652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.275841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.275873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.276051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.276083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.276359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.276401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.276667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.276700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.276997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.277030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.277316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.277350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.277567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.277598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.277851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.277881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.278187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.278218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.278545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.278578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.278833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.278864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.279127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.279158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.279380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.279413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.279691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.279720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.280007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.280038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.280319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.280351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.280512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.280543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.280762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.280793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.280975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.281011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.281188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.281219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.281501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.281535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.281677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.281708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.281957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.281989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.282169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.282199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.282494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.282526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.282717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.282747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.347 [2024-12-05 21:21:45.282956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.282987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 
00:28:37.347 [2024-12-05 21:21:45.283260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.347 [2024-12-05 21:21:45.283290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.347 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.283489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.283523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.283812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.283845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.284124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.284157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.284445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.284479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.284757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.284790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.285087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.285118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.285389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.285424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.285719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.285751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.286018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.286052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.286250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.286286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.286509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.286543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.286825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.286857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.287140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.287176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.287409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.287444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.287656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.287691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.287890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.287923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.288123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.288157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.288412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.288452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.288734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.288767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.288998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.289030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.289306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.289341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.289628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.289661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.289937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.289970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.290168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.290202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.290460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.290493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.290696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.290729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.291017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.291051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.291252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.291286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.291571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.291605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.291911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.291945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.292206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.292242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.292455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.292490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.292683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.292719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.292919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.292953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 
00:28:37.348 [2024-12-05 21:21:45.293150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.293184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.293467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.293500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.293731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.293765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.293897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.348 [2024-12-05 21:21:45.293929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.348 qpair failed and we were unable to recover it. 00:28:37.348 [2024-12-05 21:21:45.294188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.294220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 
00:28:37.349 [2024-12-05 21:21:45.294497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.294534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.294758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.294790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.295007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.295039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.295232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.295264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.295449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.295484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 
00:28:37.349 [2024-12-05 21:21:45.295759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.295790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.296079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.296113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.296312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.296347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.296638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.296671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.296941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.296974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 
00:28:37.349 [2024-12-05 21:21:45.297099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.297131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.297403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.297437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.297660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.297696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.297949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.297981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 00:28:37.349 [2024-12-05 21:21:45.298239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.349 [2024-12-05 21:21:45.298271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.349 qpair failed and we were unable to recover it. 
00:28:37.349 [2024-12-05 21:21:45.298457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.298490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.298772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.298804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.299021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.299052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.299247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.299278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.299553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.299592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.299895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.299930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.300136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.300167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.300287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.300318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.300555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.300590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.300886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.300919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.301116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.301148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.301336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.301376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.301513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.301546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.301749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.301781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.302006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.302039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.302238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.302270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.302457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.302490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.302729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.302762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.303022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.303054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.303232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.303265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.303475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.303509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.303767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.303801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.349 [2024-12-05 21:21:45.304066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.349 [2024-12-05 21:21:45.304099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.349 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.304231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.304263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.304409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.304444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.304653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.304685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.304890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.304923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.305046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.305078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.305300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.305332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.305600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.305632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.305836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.305868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.306138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.306175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.306379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.306414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.306558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.306592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.306719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.306753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.307031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.307063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.307270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.307305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.307475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.307511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.307705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.307737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.307936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.307968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.308154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.308187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.308392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.308426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.308688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.308721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.308919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.308955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.309148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.309183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.309389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.309425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.309620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.309657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.309862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.309897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.310039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.310072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.310297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.310329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.310544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.310581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.310696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.310728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.310928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.310959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.311098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.311132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.311316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.311348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.311578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.311613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.311822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.350 [2024-12-05 21:21:45.311854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.350 qpair failed and we were unable to recover it.
00:28:37.350 [2024-12-05 21:21:45.311998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.312033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.312304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.312335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.312627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.312662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.312800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.312831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.313091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.313122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.313303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.313335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.313544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.313577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.313829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.313861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.314147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.314179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.314386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.314420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.314648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.314680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.314880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.314912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.315092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.315125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.315266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.315298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.315578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.315612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.315839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.315878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.316055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.316088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.316292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.316325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.316476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.316509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.316785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.316818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.316997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.317030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.317301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.317333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.317594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.317627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.317763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.317795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.318072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.318105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.318313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.318344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.318625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.318658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.318802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.318835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.319060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.319093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.319320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.319352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.319553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.319586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.319724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.319755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.319946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.319978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.320177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.320209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.320437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.320472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.320619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.320654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.320956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.320987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.321171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.351 [2024-12-05 21:21:45.321203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.351 qpair failed and we were unable to recover it.
00:28:37.351 [2024-12-05 21:21:45.321458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.351 [2024-12-05 21:21:45.321491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.321794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.321825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.322026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.322059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.322337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.322376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.322645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.322677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.322812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.322844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.323035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.323068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.323357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.323413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.323638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.323669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.323947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.323980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.324273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.324304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.324584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.324619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.324899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.324934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.325148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.325180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.325320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.325352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.325635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.325669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.325946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.325978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.326298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.326505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.326540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.326816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.326848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.327030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.327062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.327255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.327288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.327492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.327525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.327734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.327767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.328067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.328098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.328381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.328414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.328665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.328697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.328998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.329030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.329297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.329329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.329533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.329566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.329842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.329873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.330136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.330169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.330421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.330454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.330658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.330690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.330892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.330923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 
00:28:37.352 [2024-12-05 21:21:45.331204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.331238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.331444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.331478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.331757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.331789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.332063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.352 [2024-12-05 21:21:45.332097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.352 qpair failed and we were unable to recover it. 00:28:37.352 [2024-12-05 21:21:45.332349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.332392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.332619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.332652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.332918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.332951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.333190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.333221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.333526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.333560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.333703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.333738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.333990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.334028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.334323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.334359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.334584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.334618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.334832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.334864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.335145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.335177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.335423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.335456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.335577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.335610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.335915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.335949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.336226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.336258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.336541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.336575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.336803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.336837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.337088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.337122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.337382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.337415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.337717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.337749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.338044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.338077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.338302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.338337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.338603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.338638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.338934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.338966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.339238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.339270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.339475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.339511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.339776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.339809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.340079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.340114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.340325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.340357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.340623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.340655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.340848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.340881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.341159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.341192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.341438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.341471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.341682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.341715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.341830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.341862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.342137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.342169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 
00:28:37.353 [2024-12-05 21:21:45.342452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.342486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.342773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.342807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.353 [2024-12-05 21:21:45.342932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.353 [2024-12-05 21:21:45.342963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.353 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.343182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.343215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.343539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.343572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.343874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.343907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.344171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.344206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.344465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.344499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.344684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.344716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.345000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.345033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.345305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.345337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.345465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.345501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.345781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.345812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.346092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.346124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.346413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.346445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.346672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.346705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.346905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.346940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.347137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.347170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.347449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.347482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.347764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.347798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.348078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.348110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.348375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.348412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.348608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.348643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.348894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.348929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.349226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.349259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.349454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.349487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.349701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.349735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.349935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.349967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.350160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.350193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.350495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.350527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.350719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.350751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.351028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.351060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.351334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.351388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.351670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.351702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.351886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.351922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.352138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.352171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.352388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.352423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.352621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.352653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.352848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.352888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.353094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.353129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 
00:28:37.354 [2024-12-05 21:21:45.353258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.354 [2024-12-05 21:21:45.353291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.354 qpair failed and we were unable to recover it. 00:28:37.354 [2024-12-05 21:21:45.353548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.353585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.353782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.353814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.354024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.354058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.354259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.354292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.354496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.354531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.354814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.354846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.354963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.354996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.355272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.355307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.355513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.355547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.355728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.355760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.356017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.356051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.356259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.356293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.356568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.356601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.356806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.356838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.356971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.357003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.357265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.357297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.357432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.357466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.357595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.357626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.357882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.357915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.358187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.358218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.358427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.358460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.358587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.358618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.358760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.358793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.359045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.359077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.359365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.359416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.359701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.359733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.360013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.360048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.360189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.360222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.360490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.360526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.360721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.360753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.361043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.361075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.361201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.361234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.361389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.361423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.361692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.361725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 
00:28:37.355 [2024-12-05 21:21:45.361985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.355 [2024-12-05 21:21:45.362017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.355 qpair failed and we were unable to recover it. 00:28:37.355 [2024-12-05 21:21:45.362141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.362174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.362454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.362489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.362603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.362634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.362892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.362930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.363132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.363164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.363441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.363474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.363680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.363711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.363908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.363940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.364192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.364223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.364430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.364463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.364658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.364694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.364824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.364857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.364988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.365021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.365219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.365251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.365456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.365489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.365602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.365634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.365791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.365823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.365957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.365991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.366265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.366301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.366502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.366535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.366665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.366700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.366847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.366882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.367015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.367048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.367266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.367299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.367497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.367534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.367731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.367763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.368044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.368078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.368227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.368259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.368513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.368549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.368769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.368804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.368995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.369032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.369188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.369220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.369425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.369458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.369583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.369616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 
00:28:37.356 [2024-12-05 21:21:45.369889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.369920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.370140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.370172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.370390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.370422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.370617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.356 [2024-12-05 21:21:45.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.356 qpair failed and we were unable to recover it. 00:28:37.356 [2024-12-05 21:21:45.370938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.357 [2024-12-05 21:21:45.370970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.357 qpair failed and we were unable to recover it. 
00:28:37.357 [2024-12-05 21:21:45.371114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.371147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.371421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.371457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.371680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.371712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.371961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.371994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.372265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.372300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.372591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.372626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.372925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.372956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.373079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.373113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.373374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.373409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.373537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.373569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.373836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.373872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.374004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.374037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.374308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.374342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.374613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.374646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.374790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.374822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.375046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.375078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.375220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.375251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.375444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.375478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.375677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.375709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.375990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.376022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.376275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.376307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.376514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.376548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.376745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.376777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.376972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.377004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.377225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.377258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.377381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.377414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.377629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.377662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.377853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.377884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.378151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.378183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.378387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.378420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.378606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.378638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.378781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.378813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.357 [2024-12-05 21:21:45.379029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.357 [2024-12-05 21:21:45.379068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.357 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.379269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.379301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.379505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.379539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.379733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.379765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.379962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.379994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.380131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.380164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.380434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.380467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.380759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.380790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.380928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.380960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.381098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.381130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.381326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.633 [2024-12-05 21:21:45.381358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.633 qpair failed and we were unable to recover it.
00:28:37.633 [2024-12-05 21:21:45.381603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.381635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.381888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.381921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.382219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.382250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.382457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.382489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.382668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.382700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.382823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.382856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.383104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.383136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.383329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.383362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.383652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.383684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.383888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.383921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.384213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.384244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.384425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.384459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.384665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.384697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.384913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.384947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.385243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.385276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.385545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.385579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.385781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.385820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.386017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.386048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.386198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.386230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.386360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.386401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.386624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.386656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.386904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.386936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.387062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.387094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.387345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.387403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.387654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.387686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.387895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.387927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.388057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.388089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.388336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.388377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.388575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.388609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.388880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.388912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.389162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.389194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.389444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.389478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.389730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.389762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.389984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.390016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.390267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.390299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.390505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.390538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.390720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.634 [2024-12-05 21:21:45.390752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.634 qpair failed and we were unable to recover it.
00:28:37.634 [2024-12-05 21:21:45.391001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.391033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.391223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.391255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.391501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.391534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.391739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.391772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.392026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.392058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.392303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.392335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.392591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.392625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.392854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.392886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.393082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.393114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.393331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.393362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.393673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.393705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.393912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.393944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.394147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.394179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.394388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.394422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.394651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.635 [2024-12-05 21:21:45.394683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.635 qpair failed and we were unable to recover it.
00:28:37.635 [2024-12-05 21:21:45.394886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.394917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.395117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.395149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.395365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.395417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.395628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.395660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.395867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.395899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 
00:28:37.635 [2024-12-05 21:21:45.396169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.396206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.396509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.396543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.396826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.396858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.397039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.397070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.397202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.397235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 
00:28:37.635 [2024-12-05 21:21:45.397444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.397478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.397667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.397699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.397973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.398005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.398279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.398311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.398597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.398629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 
00:28:37.635 [2024-12-05 21:21:45.398814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.398847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.399051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.399083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.399333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.399365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.399678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.399711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.399951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.399982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 
00:28:37.635 [2024-12-05 21:21:45.400258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.400290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.400497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.635 [2024-12-05 21:21:45.400530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.635 qpair failed and we were unable to recover it. 00:28:37.635 [2024-12-05 21:21:45.400757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.400790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.401041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.401073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.401331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.401362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.401565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.401597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.401744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.401777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.402046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.402078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.402332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.402365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.402670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.402703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.402964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.402996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.403284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.403316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.403601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.403640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.403922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.403953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.404230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.404262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.404576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.404608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.404816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.404848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.405125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.405157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.405336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.405377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.405654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.405687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.405933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.405966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.406237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.406269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.406562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.406595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.406814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.406847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.407107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.407139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.407395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.407427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.407557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.407590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.407863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.407895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.408075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.408106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.408358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.408399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.408618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.408650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.408843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.408875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.409131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.409164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.409363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.409408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.409602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.409634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 
00:28:37.636 [2024-12-05 21:21:45.409910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.409942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.410134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.410166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.410430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.636 [2024-12-05 21:21:45.410463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.636 qpair failed and we were unable to recover it. 00:28:37.636 [2024-12-05 21:21:45.410689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.410721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.410925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.410956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 
00:28:37.637 [2024-12-05 21:21:45.411163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.411196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.411445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.411479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.411734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.411765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.412016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.412048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.412304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.412336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 
00:28:37.637 [2024-12-05 21:21:45.412622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.412656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.412938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.412971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.413250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.413281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.413542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.413574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.413879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.413912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 
00:28:37.637 [2024-12-05 21:21:45.414174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.414205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.414424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.414457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.414713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.414745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.415050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.415088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.415345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.415394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 
00:28:37.637 [2024-12-05 21:21:45.415544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.415576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.415776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.415808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.416089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.416122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.416273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.416305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.416629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.416663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 
00:28:37.637 [2024-12-05 21:21:45.416953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.416984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.417247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.417279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.417581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.417613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.417808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.417841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 00:28:37.637 [2024-12-05 21:21:45.418099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.637 [2024-12-05 21:21:45.418131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.637 qpair failed and we were unable to recover it. 
00:28:37.637 [2024-12-05 21:21:45.418433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.418466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.418656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.418688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.418979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.419010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.419268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.419300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.419556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.419590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.419892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.419924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.420216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.420248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.420447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.420480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.420731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.420763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.420966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.420998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.421189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.637 [2024-12-05 21:21:45.421220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.637 qpair failed and we were unable to recover it.
00:28:37.637 [2024-12-05 21:21:45.421497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.421530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.421752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.421784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.422003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.422036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.422316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.422348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.422627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.422660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.422863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.422895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.423092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.423124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.423404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.423437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.423691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.423722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.424003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.424036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.424295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.424327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.424607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.424640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.424823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.424855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.425039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.425070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.425344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.425385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.425568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.425599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.425880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.425912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.426130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.426163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.426439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.426473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.426763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.426797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.426978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.427010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.427166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.427198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.427450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.427486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.427751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.427784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.428083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.428116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.428390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.428423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.428711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.428744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.428999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.429032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.429249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.429282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.429550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.429583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.429856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.429888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.638 qpair failed and we were unable to recover it.
00:28:37.638 [2024-12-05 21:21:45.430108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.638 [2024-12-05 21:21:45.430141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.430403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.430437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.430730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.430763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.431069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.431104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.431329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.431363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.431593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.431625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.431848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.431880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.432103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.432136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.432418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.432451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.432685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.432718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.432930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.432962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.433213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.433245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.433454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.433487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.433755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.433787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.433986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.434025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.434286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.434318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.434538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.434572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.434771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.434803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.435069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.435102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.435353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.435396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.435536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.435568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.435758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.435789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.436043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.436075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.436326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.436358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.436664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.436697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.436964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.436996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.437283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.437315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.437526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.437559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.437775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.437807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.438069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.438101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.438302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.438337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.438610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.438645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.438852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.438882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.439154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.439186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.439439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.439473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.439677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.439709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.439989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.440020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.639 [2024-12-05 21:21:45.440207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.639 [2024-12-05 21:21:45.440240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.639 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.440514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.440546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.440801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.440833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.441194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.441226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.441490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.441522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.441732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.441765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.441967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.442000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.442201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.442235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.442437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.442471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.442672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.442704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.442957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.442991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.443265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.443520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.443557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.443784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.443821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.444055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.444087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.444312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.444346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.444521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.444556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.444760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.444794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.444975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.445015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.445195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.445228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.445491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.445524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.445727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.445760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.445898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.445929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.446199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.446231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.446519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.446554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.446853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.446886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.447160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.447191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.447485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.447518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.447749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.447782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.448011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.448043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.448327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.448361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.448616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.448649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.448939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.448973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.449115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.449147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.449399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.449432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.449704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.640 [2024-12-05 21:21:45.449736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.640 qpair failed and we were unable to recover it.
00:28:37.640 [2024-12-05 21:21:45.449944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.640 [2024-12-05 21:21:45.449978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.640 qpair failed and we were unable to recover it. 00:28:37.640 [2024-12-05 21:21:45.450234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.640 [2024-12-05 21:21:45.450265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.640 qpair failed and we were unable to recover it. 00:28:37.640 [2024-12-05 21:21:45.450479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.450512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.450762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.450796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.450927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.450958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.451256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.451289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.451499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.451533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.451739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.451772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.452050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.452083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.452379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.452421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.452638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.452669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.452804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.452837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.452975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.453008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.453224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.453260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.453521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.453554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.453822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.453854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.453982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.454015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.454300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.454331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.454595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.454629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.454930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.454962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.455258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.455289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.455509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.455542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.455748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.455781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.455996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.456029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.456284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.456316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.456614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.456648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.456918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.456950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.457150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.457183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.457390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.457424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.457654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.457686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.457937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.457968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.458233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.458266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.458487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.458521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.458823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.458854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.459128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.459160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.459287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.459319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.459596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.459630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.459816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.459848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.460094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.460126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 00:28:37.641 [2024-12-05 21:21:45.460400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.460434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.641 qpair failed and we were unable to recover it. 
00:28:37.641 [2024-12-05 21:21:45.460668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.641 [2024-12-05 21:21:45.460700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.460896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.460927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.461188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.461220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.461440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.461473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.461686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.461719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.461913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.461945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.462226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.462259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.462512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.462544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.462745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.462777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.462932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.462964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.463258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.463296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.463454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.463488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.463766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.463798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.464010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.464042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.464233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.464264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.464458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.464491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.464747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.464779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.464993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.465025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.465290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.465323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.465589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.465621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.465829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.465862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.466146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.466177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.466494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.466527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.466661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.466693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.466909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.466942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.467161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.467193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.467385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.467420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.467576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.467608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.467891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.467923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.468163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.468195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.468466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.468498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.468786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.468818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.469035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.469068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.469342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.469380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.469587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.469620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 
00:28:37.642 [2024-12-05 21:21:45.469802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.469834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.470102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.642 [2024-12-05 21:21:45.470134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.642 qpair failed and we were unable to recover it. 00:28:37.642 [2024-12-05 21:21:45.470391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.643 [2024-12-05 21:21:45.470429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.643 qpair failed and we were unable to recover it. 00:28:37.643 [2024-12-05 21:21:45.470684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.643 [2024-12-05 21:21:45.470717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.643 qpair failed and we were unable to recover it. 00:28:37.643 [2024-12-05 21:21:45.470965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.643 [2024-12-05 21:21:45.470996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.643 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.502286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.502318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.502531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.502565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.502819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.502852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.503154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.503186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.503451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.503484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.503690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.503723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.504000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.504032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.504311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.504344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.504652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.504685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.504944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.504976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.505197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.505229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.505450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.505483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.505679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.505711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.505925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.505956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.506230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.506261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.506563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.506596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.506810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.506842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.507120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.507152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.507396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.507430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.507716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.507748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.507897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.507929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.508135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.508167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.508459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.508493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.508784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.508816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.509043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.509076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.509281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.509312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.509628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.509662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.509938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.509970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.510251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.510283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.510571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.510604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 
00:28:37.646 [2024-12-05 21:21:45.510880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.510913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.646 [2024-12-05 21:21:45.511206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.646 [2024-12-05 21:21:45.511238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.646 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.511499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.511532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.511768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.511800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.511994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.512025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 
00:28:37.647 [2024-12-05 21:21:45.512302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.512333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.512627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.512661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.512850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.512927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.513258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.513295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.513596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.513632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 
00:28:37.647 [2024-12-05 21:21:45.513834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.513867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.514154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.514186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.514471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.514506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.514724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.514756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.515031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.515063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 
00:28:37.647 [2024-12-05 21:21:45.515355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.515399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.515603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.515634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.515816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.515848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.516130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.516163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.516431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.516463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 
00:28:37.647 [2024-12-05 21:21:45.516763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.516805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.517008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.517040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.517320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.517352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.517587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.517619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.517877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.517908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 
00:28:37.647 [2024-12-05 21:21:45.518117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.518149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.518349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.518395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.518590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.518622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.518898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.518930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.519217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.519249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 
00:28:37.647 [2024-12-05 21:21:45.519391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.519424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.519608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.519640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.519870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.519902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.647 [2024-12-05 21:21:45.520204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.647 [2024-12-05 21:21:45.520235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.647 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.520524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.520558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 
00:28:37.648 [2024-12-05 21:21:45.520837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.520868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.521155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.521187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.521416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.521451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.521732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.521763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.522017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.522050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 
00:28:37.648 [2024-12-05 21:21:45.522309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.522342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.522649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.522681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.522932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.522964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.523187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.523220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 00:28:37.648 [2024-12-05 21:21:45.523489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.523521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it. 
00:28:37.648 [2024-12-05 21:21:45.523744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.648 [2024-12-05 21:21:45.523776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.648 qpair failed and we were unable to recover it.
[last three messages repeated with advancing timestamps through 2024-12-05 21:21:45.555306 — every connection retry of tqpair=0x7fa9e0000b90 to 10.0.0.2:4420 failed with errno = 111 and the qpair could not be recovered]
00:28:37.651 [2024-12-05 21:21:45.555595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.555628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.555818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.555850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.556123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.556155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.556357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.556400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.556630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.556661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 
00:28:37.651 [2024-12-05 21:21:45.556853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.556884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.557201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.557232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.557493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.557539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.557735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.557767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.558043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.558075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 
00:28:37.651 [2024-12-05 21:21:45.558389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.558421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.558693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.558724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.558910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.558942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.559216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.559248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.559543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.559577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 
00:28:37.651 [2024-12-05 21:21:45.559849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.559880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.560171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.560203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.560335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.560384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.560589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.560621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.560832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.560864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 
00:28:37.651 [2024-12-05 21:21:45.561117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.561149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.561437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.561471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.561695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.561727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.651 [2024-12-05 21:21:45.561951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.651 [2024-12-05 21:21:45.561982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.651 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.562236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.562267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.562534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.562566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.562851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.562882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.563165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.563197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.563482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.563515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.563798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.563829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.564092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.564124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.564328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.564359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.564631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.564664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.564860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.564891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.565132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.565163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.565364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.565408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.565634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.565665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.565852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.565884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.566149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.566181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.566366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.566409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.566679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.566711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.566914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.566945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.567140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.567172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.567357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.567400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.567656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.567689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.567872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.567904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.568182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.568213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.568415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.568454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.568662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.568694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.568911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.568943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.569125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.569156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.569410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.569444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.569698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.569729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.569934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.569966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.570259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.570290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 
00:28:37.652 [2024-12-05 21:21:45.570513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.570546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.570825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.652 [2024-12-05 21:21:45.570855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.652 qpair failed and we were unable to recover it. 00:28:37.652 [2024-12-05 21:21:45.571044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.571076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.571346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.571386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.571589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.571621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 
00:28:37.653 [2024-12-05 21:21:45.571801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.571832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.572117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.572149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.572405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.572439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.572695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.572727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.573027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.573060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 
00:28:37.653 [2024-12-05 21:21:45.573263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.573295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.573449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.573483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.573793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.573825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.574103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.574134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.574415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.574448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 
00:28:37.653 [2024-12-05 21:21:45.574736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.574768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.575049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.575080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.575350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.575390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.575681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.575714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 00:28:37.653 [2024-12-05 21:21:45.575988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.653 [2024-12-05 21:21:45.576020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.653 qpair failed and we were unable to recover it. 
00:28:37.653 [2024-12-05 21:21:45.576172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.653 [2024-12-05 21:21:45.576203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.653 qpair failed and we were unable to recover it.
[... the same message pair — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 21:21:45.576 through 21:21:45.606; only the microsecond timestamps differ ...]
00:28:37.656 [2024-12-05 21:21:45.606975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.607007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.607295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.607327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.607608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.607641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.607898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.607931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.608311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.608397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 
00:28:37.656 [2024-12-05 21:21:45.608707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.608742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.608964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.608997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.609195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.609228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.609507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.609540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.609864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.609895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 
00:28:37.656 [2024-12-05 21:21:45.610123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.610156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.610452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.610485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.610676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.610709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.611009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.611042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.611291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.611323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 
00:28:37.656 [2024-12-05 21:21:45.611624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.611658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.611959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.611992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.612194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.612235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.656 qpair failed and we were unable to recover it. 00:28:37.656 [2024-12-05 21:21:45.612517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.656 [2024-12-05 21:21:45.612551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.612809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.612842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.613094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.613126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.613432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.613466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.613688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.613720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.613974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.614008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.614309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.614344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.614642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.614677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.614971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.615005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.615224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.615257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.615454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.615490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.615744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.615776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.616032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.616065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.616269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.616303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.616576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.616610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.616807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.616840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.617040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.617071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.617268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.617299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.617579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.617612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.617869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.617902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.618178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.618211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.618496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.618531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.618793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.618828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.619044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.619077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.619265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.619300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.619487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.619520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.619901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.619982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.620218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.620254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.620566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.620605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.620883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.620916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.621195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.621227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.621516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.621551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 
00:28:37.657 [2024-12-05 21:21:45.621829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.621863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.622165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.622196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.657 [2024-12-05 21:21:45.622465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.657 [2024-12-05 21:21:45.622500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.657 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.622732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.622768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.622989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.623023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 
00:28:37.658 [2024-12-05 21:21:45.623220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.623253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.623498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.623534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.623819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.623852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.624160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.624193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.624473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.624523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 
00:28:37.658 [2024-12-05 21:21:45.624797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.624832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.624969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.625001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.625208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.625239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.625427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.625461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.625775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.625809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 
00:28:37.658 [2024-12-05 21:21:45.626086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.626118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.626271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.626304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.626527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.626561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.626766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.626801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.627056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.627088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 
00:28:37.658 [2024-12-05 21:21:45.627348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.627391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.627680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.627720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.627952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.627985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.630592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.630629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 00:28:37.658 [2024-12-05 21:21:45.630827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.658 [2024-12-05 21:21:45.630858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.658 qpair failed and we were unable to recover it. 
00:28:37.658 [2024-12-05 21:21:45.630982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.631014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.633542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.633578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.633786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.633818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.634023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.634055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.634337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.634384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.634653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.634684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.634913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.634945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.635198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.635231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.635513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.635548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.635833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.635865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.636108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.636140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.636396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.636428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.636690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.636721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.637023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.637055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.637258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.637290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.637568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.658 [2024-12-05 21:21:45.637601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.658 qpair failed and we were unable to recover it.
00:28:37.658 [2024-12-05 21:21:45.637881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.637916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.638171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.638203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.638484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.638517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.638763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.638794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.638988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.639020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.639322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.639356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.639574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.639608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.639870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.639909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.640164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.640197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.640337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.640380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.640604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.640639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.640851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.640886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.641161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.641193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.641337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.641380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.641500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.641534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.641716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.641750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.641950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.641985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.642202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.642239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.642443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.642480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.642607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.642642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.642846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.642882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.643224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.643301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.643547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.643587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.643851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.643884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.644137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.644169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.644384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.644419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.644625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.644658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.644880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.644912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.645136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.645169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.645404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.645447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.645712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.645745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.645930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.645963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.646173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.659 [2024-12-05 21:21:45.646205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.659 qpair failed and we were unable to recover it.
00:28:37.659 [2024-12-05 21:21:45.646475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.646509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.646710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.646758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.647040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.647071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.647288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.647321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.647475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.647511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.647786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.647820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.648047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.648080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.648279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.648311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.648558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.648591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.648788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.648820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.649074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.649106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.649307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.649339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.649660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.649697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.649894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.649925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.650060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.650092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.650321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.650354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.650649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.650681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.650935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.650968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.651108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.651143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.651359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.651404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.651603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.651636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.651841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.651874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.652100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.652132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.652346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.652396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.652602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.652639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.652855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.652887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.653091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.653126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.653365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.653407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.653539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.653577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.653851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.653885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.654135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.654167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.654420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.654455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.654638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.654670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.654871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.654904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.655101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.655136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.655262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.655293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.655498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.655531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.655787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.660 [2024-12-05 21:21:45.655820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.660 qpair failed and we were unable to recover it.
00:28:37.660 [2024-12-05 21:21:45.655953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.655985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.656257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.656290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.656562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.656595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.656853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.656885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.657169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.657202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.657409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.657444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.657697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.657728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.658024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.658056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.658260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.658292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.658494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.658527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.658722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.658755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.658956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.658988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.659268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.659301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.659519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.659552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.659805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.659837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.660090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.660123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.660330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.660361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.660632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.660665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.660794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.660826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.661120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.661152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.661304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.661336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.661534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.661567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.661817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.661850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.662124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.662156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.662378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.662412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.662638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.662671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.662921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.662953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.663143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.661 [2024-12-05 21:21:45.663176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.661 qpair failed and we were unable to recover it.
00:28:37.661 [2024-12-05 21:21:45.663453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.661 [2024-12-05 21:21:45.663486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.661 qpair failed and we were unable to recover it. 00:28:37.661 [2024-12-05 21:21:45.663737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.661 [2024-12-05 21:21:45.663770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.663952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.663984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.664267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.664305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.664509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.664542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.664821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.664854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.665124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.665155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.665365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.665406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.665595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.665626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.665853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.665886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.666140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.666172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.666456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.666489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.666774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.666807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.667009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.667042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.667266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.667299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.667508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.667543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.667849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.667882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.668205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.668237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.668513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.668547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.668742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.668774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.668978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.669011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.669284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.669318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.669594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.669626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.669918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.669950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.670198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.670231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.670425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.670458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.670663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.670695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.670894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.670927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.671141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.671173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.671365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.671408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.671594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.671631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.671833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.671868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.672149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.672182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.672337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.672382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.672667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.672699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 
00:28:37.662 [2024-12-05 21:21:45.672840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.672872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.673127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.662 [2024-12-05 21:21:45.673158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.662 qpair failed and we were unable to recover it. 00:28:37.662 [2024-12-05 21:21:45.673464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.673497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.673789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.673822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.674125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.674157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.663 [2024-12-05 21:21:45.674422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.674455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.674754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.674786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.674984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.675016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.675239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.675271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.675559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.675594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.663 [2024-12-05 21:21:45.675815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.675847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.676048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.676079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.676280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.676313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.676512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.676545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.676823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.676856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.663 [2024-12-05 21:21:45.677123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.677155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.677458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.677491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.677752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.677785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.677984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.678015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.678243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.678275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.663 [2024-12-05 21:21:45.678575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.678608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.678830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.678863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.679132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.679164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.679444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.679481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.679765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.679800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.663 [2024-12-05 21:21:45.680001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.680033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.680284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.680317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.680526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.680559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.680837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.680869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.681056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.681088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.663 [2024-12-05 21:21:45.681290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.681324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.681555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.681587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.681808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.681840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.682102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.682135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 00:28:37.663 [2024-12-05 21:21:45.682392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.663 [2024-12-05 21:21:45.682426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.663 qpair failed and we were unable to recover it. 
00:28:37.664 [2024-12-05 21:21:45.682684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.664 [2024-12-05 21:21:45.682717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.664 qpair failed and we were unable to recover it. 00:28:37.664 [2024-12-05 21:21:45.683017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.664 [2024-12-05 21:21:45.683060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.664 qpair failed and we were unable to recover it. 00:28:37.664 [2024-12-05 21:21:45.683283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.664 [2024-12-05 21:21:45.683315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.664 qpair failed and we were unable to recover it. 00:28:37.664 [2024-12-05 21:21:45.683604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.664 [2024-12-05 21:21:45.683637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.664 qpair failed and we were unable to recover it. 00:28:37.664 [2024-12-05 21:21:45.683771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.664 [2024-12-05 21:21:45.683803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.664 qpair failed and we were unable to recover it. 
00:28:37.664 [2024-12-05 21:21:45.684012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.664 [2024-12-05 21:21:45.684044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.664 qpair failed and we were unable to recover it.
00:28:37.667 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-12-05 21:21:45.684318 through 21:21:45.715024 ...]
00:28:37.667 [2024-12-05 21:21:45.715181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.715216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.715438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.715471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.715668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.715700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.715829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.715862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.716068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.716101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 
00:28:37.667 [2024-12-05 21:21:45.716388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.716422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.716647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.716680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.716902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.716934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.717113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.717147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.717348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.717400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 
00:28:37.667 [2024-12-05 21:21:45.717613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.717648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.717756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.717789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.717989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.718021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.718280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.718313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.718524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.718561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 
00:28:37.667 [2024-12-05 21:21:45.718691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.718723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.718880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.718912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.719065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.719104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.719246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.719277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.719479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.719512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 
00:28:37.667 [2024-12-05 21:21:45.719653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.719685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.719886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.667 [2024-12-05 21:21:45.719919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.667 qpair failed and we were unable to recover it. 00:28:37.667 [2024-12-05 21:21:45.720050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.720082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.720406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.720440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.720724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.720756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 
00:28:37.668 [2024-12-05 21:21:45.721038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.721073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.721361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.721408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.721674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.721706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.721955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.721989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.722185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.722217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 
00:28:37.668 [2024-12-05 21:21:45.722405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.722438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.668 [2024-12-05 21:21:45.722724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.668 [2024-12-05 21:21:45.722757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.668 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.723016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.723051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.723245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.723277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.723557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.723590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 
00:28:37.942 [2024-12-05 21:21:45.723797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.723830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.724104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.724137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.724291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.724322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.724610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.724876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.724908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 
00:28:37.942 [2024-12-05 21:21:45.725121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.725157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.725289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.725321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.725527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.725562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.725759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.725794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.726048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.726079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 
00:28:37.942 [2024-12-05 21:21:45.726269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.942 [2024-12-05 21:21:45.726304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.942 qpair failed and we were unable to recover it. 00:28:37.942 [2024-12-05 21:21:45.726514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.726548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.726774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.726808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.726995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.727028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.727240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.727273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.943 [2024-12-05 21:21:45.727547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.727582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.727715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.727748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.727975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.728010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.728309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.728341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.728515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.728572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.943 [2024-12-05 21:21:45.728857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.728890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.729102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.729134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.729276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.729310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.729606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.729646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.729902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.729934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.943 [2024-12-05 21:21:45.730195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.730230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.730350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.730394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.730601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.730634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.730826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.730860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.731139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.731172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.943 [2024-12-05 21:21:45.731391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.731426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.731710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.731741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.732015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.732047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.732364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.732410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.732609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.732640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.943 [2024-12-05 21:21:45.732914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.732947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.733149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.733182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.733451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.733486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.733691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.733723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.733873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.733907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.943 [2024-12-05 21:21:45.734184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.734220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.734402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.734438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.734661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.734695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.735000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.735033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 00:28:37.943 [2024-12-05 21:21:45.735317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.943 [2024-12-05 21:21:45.735349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.943 qpair failed and we were unable to recover it. 
00:28:37.946 [2024-12-05 21:21:45.766173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.766206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.766500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.766532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.766758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.766792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.766981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.767014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.767219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.767259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 
00:28:37.946 [2024-12-05 21:21:45.767530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.767565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.767768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.767802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.767984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.768017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.768208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.768241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 00:28:37.946 [2024-12-05 21:21:45.768531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.946 [2024-12-05 21:21:45.768565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.946 qpair failed and we were unable to recover it. 
00:28:37.946 [2024-12-05 21:21:45.768867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.768899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.769160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.769192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.769328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.769364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.769632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.769666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.769938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.769972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 
00:28:37.947 [2024-12-05 21:21:45.770181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.770212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.770492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.770527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.770807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.770842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.771122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.771155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.771401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.771435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 
00:28:37.947 [2024-12-05 21:21:45.771630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.771662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.771958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.771989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.772170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.772202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.772479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.772511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.772651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.772683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 
00:28:37.947 [2024-12-05 21:21:45.772886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.772919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.773123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.773156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.773434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.773470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.773751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.773783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.773914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.773949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 
00:28:37.947 [2024-12-05 21:21:45.774226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.774260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.774540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.774579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.774718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.774750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.774865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.774898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.775169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.775202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 
00:28:37.947 [2024-12-05 21:21:45.775484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.775520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.775710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.775742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.776024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.776059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.776311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.776345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.776630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.776664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 
00:28:37.947 [2024-12-05 21:21:45.776944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.776980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.777265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.947 [2024-12-05 21:21:45.777299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.947 qpair failed and we were unable to recover it. 00:28:37.947 [2024-12-05 21:21:45.777529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.777564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.777750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.777783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.777991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.778026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.778307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.778340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.778627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.778660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.778843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.778877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.779145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.779179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.779386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.779421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.779541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.779574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.779862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.779897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.780171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.780204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.780428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.780462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.780742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.780777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.780959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.780992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.781262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.781295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.781578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.781613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.781893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.781924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.782115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.782147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.782424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.782457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.782601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.782633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.782838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.782869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.783074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.783108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.783311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.783343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.783492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.783526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.783759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.783790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.783902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.783934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.784040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.784072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.784345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.784391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.784518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.784549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.784682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.784714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.784911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.784948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.785219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.785251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 00:28:37.948 [2024-12-05 21:21:45.785508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.785541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [2024-12-05 21:21:45.785666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.948 [2024-12-05 21:21:45.785699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.948 qpair failed and we were unable to recover it. 
00:28:37.948 [... the same connect() failed (errno = 111) / qpair failed sequence repeats 37 more times for tqpair=0x6afbe0 between 21:21:45.785976 and 21:21:45.795609 ...] 
00:28:37.949 [2024-12-05 21:21:45.795971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.949 [2024-12-05 21:21:45.796051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.949 qpair failed and we were unable to recover it. 
00:28:37.951 [... the same sequence repeats 76 more times for tqpair=0x7fa9e8000b90 between 21:21:45.796283 and 21:21:45.817510 ...] 
00:28:37.951 [2024-12-05 21:21:45.817786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.951 [2024-12-05 21:21:45.817818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.951 qpair failed and we were unable to recover it. 00:28:37.951 [2024-12-05 21:21:45.818012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.951 [2024-12-05 21:21:45.818044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.951 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.818306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.818337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.818559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.818598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.818854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.818886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.819069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.819101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.819318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.819349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.819561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.819593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.819791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.819823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.820007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.820038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.820312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.820343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.820497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.820529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.820732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.820765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.820964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.820996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.821249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.821282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.821535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.821568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.821824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.821856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.822117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.822149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.822449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.822482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.822778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.822810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.823037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.823070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.823382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.823415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.823691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.823723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.823926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.823958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.824161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.824193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.824459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.824492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.824636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.824668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.824941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.824973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.825230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.825262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.825396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.825429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.825644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.825676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.826010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.826041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.826313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.826345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.826649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.826682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.826866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.826897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 
00:28:37.952 [2024-12-05 21:21:45.827162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.827193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.952 [2024-12-05 21:21:45.827393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.952 [2024-12-05 21:21:45.827426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.952 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.827611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.827643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.827935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.827968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.828223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.828255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.828588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.828623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.828829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.828864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.829186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.829220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.829420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.829460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.829731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.829764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.829905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.829937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.830129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.830160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.830361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.830407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.830683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.830715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.830871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.830903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.831040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.831071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.831291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.831323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.831644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.831677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.831929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.831960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.832294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.832581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.832615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.832733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.832765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.833003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.833036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.833347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.833393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.833669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.833702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.833895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.833927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.834204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.834237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.834520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.834553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.834815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.834848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.835008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.835041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.835342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.835383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.835580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.835612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.835830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.835862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.836069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.836101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.836231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.836262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-12-05 21:21:45.836493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.836527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.836817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.836849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.837007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.837038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.837319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.837351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-12-05 21:21:45.837642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.953 [2024-12-05 21:21:45.837675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.954 [2024-12-05 21:21:45.837809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.954 [2024-12-05 21:21:45.837842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-12-05 21:21:45.838128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.954 [2024-12-05 21:21:45.838160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-12-05 21:21:45.838421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.954 [2024-12-05 21:21:45.838455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-12-05 21:21:45.838638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.954 [2024-12-05 21:21:45.838670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-12-05 21:21:45.838867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.954 [2024-12-05 21:21:45.838899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.868307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.868340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.868653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.868685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.868813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.868846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.868985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.869017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.869216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.869248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.869531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.869565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.869708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.869740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.870052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.870085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.870239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.870272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.870553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.870587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.870794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.870827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.871037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.871068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.871320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.871555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.871594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.871792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.871824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.872081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.872113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.872466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.872499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.872695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.872728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.872935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.872970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.873318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.873351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.873655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.873691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.873912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.873950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.874205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.874239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.874473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.874507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.874717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.874750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.874897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.874929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.875213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.875246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.875522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.875555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.875835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.875867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.876221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.876254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 
00:28:37.957 [2024-12-05 21:21:45.876535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.876571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.876827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.957 [2024-12-05 21:21:45.876859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.957 qpair failed and we were unable to recover it. 00:28:37.957 [2024-12-05 21:21:45.877159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.877191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.877418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.877455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.877665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.877696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.877955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.877990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.878257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.878290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.878496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.878531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.878692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.878728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.879018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.879050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.879248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.879280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.879493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.879526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.879727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.879760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.879897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.879928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.880240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.880271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.880489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.880522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.880726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.880760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.880916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.880948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.881147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.881179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.881394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.881428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.881661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.881694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.881841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.881872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.882084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.882116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.882350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.882401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.882590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.882621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.882816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.882850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.883052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.883084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.883355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.883418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.883705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.883737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.884017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.884049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.884246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.884279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.884464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.884498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.884639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.884672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.884925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.884962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.885270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.885304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.885562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.885595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.885736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.885767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.885971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.886003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.886222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.886255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 00:28:37.958 [2024-12-05 21:21:45.886440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.886473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.958 qpair failed and we were unable to recover it. 
00:28:37.958 [2024-12-05 21:21:45.886668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.958 [2024-12-05 21:21:45.886700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.959 qpair failed and we were unable to recover it. 00:28:37.959 [2024-12-05 21:21:45.886910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.959 [2024-12-05 21:21:45.886941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.959 qpair failed and we were unable to recover it. 00:28:37.959 [2024-12-05 21:21:45.887219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.959 [2024-12-05 21:21:45.887251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.959 qpair failed and we were unable to recover it. 00:28:37.959 [2024-12-05 21:21:45.887497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.959 [2024-12-05 21:21:45.887531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.959 qpair failed and we were unable to recover it. 00:28:37.959 [2024-12-05 21:21:45.887678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.959 [2024-12-05 21:21:45.887711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:37.959 qpair failed and we were unable to recover it. 
00:28:37.959 [2024-12-05 21:21:45.887894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.959 [2024-12-05 21:21:45.887927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:37.959 qpair failed and we were unable to recover it.
[... the same three-record sequence ("connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats for every retry: tqpair=0x7fa9e8000b90 through 21:21:45.896, then tqpair=0x6afbe0 from 21:21:45.900 onward ...]
00:28:37.962 [2024-12-05 21:21:45.918779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.962 [2024-12-05 21:21:45.918813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.962 qpair failed and we were unable to recover it.
00:28:37.962 [2024-12-05 21:21:45.919013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.919044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.919322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.919355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.919576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.919609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.919881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.919913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.920197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.920229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 
00:28:37.962 [2024-12-05 21:21:45.920453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.920486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.920761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.920794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.920992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.921024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.921288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.921320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.921445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.921480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 
00:28:37.962 [2024-12-05 21:21:45.921664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.921697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.921938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.921971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.922277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.922310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.922465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.922500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.922657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.922690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 
00:28:37.962 [2024-12-05 21:21:45.922993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.923027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.923283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.923316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.923629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.923662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.923821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.923853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.924081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.924116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 
00:28:37.962 [2024-12-05 21:21:45.924392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.924425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.924660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.924695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.924876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.924911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.925200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.925235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.925499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.925534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 
00:28:37.962 [2024-12-05 21:21:45.925741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.925779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.926042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.926076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.926289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.926320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.926491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.962 [2024-12-05 21:21:45.926526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.962 qpair failed and we were unable to recover it. 00:28:37.962 [2024-12-05 21:21:45.926736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.926769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.926993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.927026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.927247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.927278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.927503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.927537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.927765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.927797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.928002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.928034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.928249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.928284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.928490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.928524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.928775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.928809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.928994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.929026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.929242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.929275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.929421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.929455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.929608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.929640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.929842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.929874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.930054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.930086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.930340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.930380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.930522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.930559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.930675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.930707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.930840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.930872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.931133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.931166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.931422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.931455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.931669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.931702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.931897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.931929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.932081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.932113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.932313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.932347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.932547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.932581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.932766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.932801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.933037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.933183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.933215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.933398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.933432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.933566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.933598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.933793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.933825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.934023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.934055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.934313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.934345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.934552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.934585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.934779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.934812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 
00:28:37.963 [2024-12-05 21:21:45.934998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.935032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.935245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.935278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.963 qpair failed and we were unable to recover it. 00:28:37.963 [2024-12-05 21:21:45.935407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.963 [2024-12-05 21:21:45.935443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 00:28:37.964 [2024-12-05 21:21:45.935644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.935677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 00:28:37.964 [2024-12-05 21:21:45.935799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.935830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 
00:28:37.964 [2024-12-05 21:21:45.936037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.936070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 00:28:37.964 [2024-12-05 21:21:45.936263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.936296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 00:28:37.964 [2024-12-05 21:21:45.936494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.936545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 00:28:37.964 [2024-12-05 21:21:45.936680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.936713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 00:28:37.964 [2024-12-05 21:21:45.937009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.964 [2024-12-05 21:21:45.937042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.964 qpair failed and we were unable to recover it. 
00:28:37.964 [2024-12-05 21:21:45.937169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.964 [2024-12-05 21:21:45.937201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:37.964 qpair failed and we were unable to recover it.
00:28:37.967 [repeated log output truncated: the same record pair — posix_sock_create connect() failure (errno = 111, ECONNREFUSED) followed by an unrecoverable nvme_tcp_qpair_connect_sock error for tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 — recurred continuously from 21:21:45.937169 through 21:21:45.969388]
00:28:37.967 [2024-12-05 21:21:45.969541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.969573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.969684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.969716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.969945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.969976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.970121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.970153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.970344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.970403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 
00:28:37.967 [2024-12-05 21:21:45.970629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.970660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.970816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.970848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.971038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.971069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.971322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.971353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.971644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.971678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 
00:28:37.967 [2024-12-05 21:21:45.972001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.972035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.972297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.972330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.972621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.972660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.972940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.972972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.973254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.973286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 
00:28:37.967 [2024-12-05 21:21:45.973410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.973444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.973584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.973615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.973869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.973900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.974131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.974164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.974358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.974403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 
00:28:37.967 [2024-12-05 21:21:45.974662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.974693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.974995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.975027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.975294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.975325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.975489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.975523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.975724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.975757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 
00:28:37.967 [2024-12-05 21:21:45.975966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.975997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.976204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.976236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.967 [2024-12-05 21:21:45.976473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.967 [2024-12-05 21:21:45.976506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.967 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.976643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.976675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.976824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.976856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.977151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.977183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.977337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.977385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.977587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.977619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.977767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.977799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.978071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.978102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.978244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.978277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.978509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.978543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.978796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.978827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.978949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.978980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.979256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.979294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.979441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.979474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.979752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.979784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.980109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.980141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.980441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.980474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.980631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.980663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.980863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.980895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.981098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.981131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.981337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.981379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.981635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.981669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.981951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.981984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.982189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.982221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.982501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.982536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.982693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.982723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.982980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.983012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.983292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.983324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.983651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.983685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.983890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.983923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.984128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.984159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.984445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.984478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.984629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.984661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 
00:28:37.968 [2024-12-05 21:21:45.984815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.984848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.985047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.985079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.985353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.985397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.985533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.968 [2024-12-05 21:21:45.985565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.968 qpair failed and we were unable to recover it. 00:28:37.968 [2024-12-05 21:21:45.985772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.985804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 
00:28:37.969 [2024-12-05 21:21:45.985980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.986012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.986297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.986328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.986626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.986660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.986805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.986837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.987105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.987137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 
00:28:37.969 [2024-12-05 21:21:45.987273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.987306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.987523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.987556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.987775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.987808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.988034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.988067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 00:28:37.969 [2024-12-05 21:21:45.988249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.988281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it. 
00:28:37.969 [2024-12-05 21:21:45.988493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.969 [2024-12-05 21:21:45.988527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.969 qpair failed and we were unable to recover it.
[the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x6afbe0, addr=10.0.0.2, port=4420 repeats verbatim through 2024-12-05 21:21:46.016881; every attempt ends with "qpair failed and we were unable to recover it."]
00:28:37.972 [2024-12-05 21:21:46.017074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.017107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.017298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.017329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.017566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.017600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.017802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.017835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.017976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.018008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 
00:28:37.972 [2024-12-05 21:21:46.018239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.018271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.018416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.018450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.018657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.018689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.018913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.018945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.019133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.019164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 
00:28:37.972 [2024-12-05 21:21:46.019420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.019453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.019741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.019773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.020019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.020051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.020242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.020277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.020573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.020606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 
00:28:37.972 [2024-12-05 21:21:46.020878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.020911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.021229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.021261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.021418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.021450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.021658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.021690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.021902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.021933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 
00:28:37.972 [2024-12-05 21:21:46.022243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.972 [2024-12-05 21:21:46.022276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.972 qpair failed and we were unable to recover it. 00:28:37.972 [2024-12-05 21:21:46.022416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.022449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.022604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.022636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.022779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.022811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.023111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.023142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.023436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.023469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.023652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.023690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.023898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.023930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.024133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.024166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.024443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.024476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.024609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.024641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.024786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.024818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.024925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.024957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.025238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.025270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.025488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.025520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.025747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.025779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.026042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.026074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.026387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.026420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.026670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.026702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.026875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.026907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.027176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.027208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.027507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.027541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.027757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.027789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.027925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.027956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.028146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.028177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.028434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.028468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.028753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.028784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.029010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.029042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.029274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.029306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.029559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.029592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.029857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.029889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.030034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.030065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.030263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.030294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.030547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.030582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.030884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.030916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 
00:28:37.973 [2024-12-05 21:21:46.031169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.031201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.031508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.031542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.031753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.031785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.031986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.973 [2024-12-05 21:21:46.032018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.973 qpair failed and we were unable to recover it. 00:28:37.973 [2024-12-05 21:21:46.032219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.974 [2024-12-05 21:21:46.032251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.974 qpair failed and we were unable to recover it. 
00:28:37.974 [2024-12-05 21:21:46.032515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.974 [2024-12-05 21:21:46.032548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.974 qpair failed and we were unable to recover it. 00:28:37.974 [2024-12-05 21:21:46.032752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.974 [2024-12-05 21:21:46.032784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.974 qpair failed and we were unable to recover it. 00:28:37.974 [2024-12-05 21:21:46.032983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.974 [2024-12-05 21:21:46.033015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:37.974 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.033346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.033583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.033615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 
00:28:38.250 [2024-12-05 21:21:46.033870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.033903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.034112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.034145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.034435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.034470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.034745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.034777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.034983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.035016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 
00:28:38.250 [2024-12-05 21:21:46.035260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.035292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.035511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.035544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.035750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.035783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.036040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.036072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 00:28:38.250 [2024-12-05 21:21:46.036221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.250 [2024-12-05 21:21:46.036252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.250 qpair failed and we were unable to recover it. 
00:28:38.250 [2024-12-05 21:21:46.036545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.250 [2024-12-05 21:21:46.036578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.250 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 2024-12-05 21:21:46.036780 through 21:21:46.066553 (log timestamps 00:28:38.250 to 00:28:38.254); tqpair=0x6afbe0, addr=10.0.0.2, port=4420, and errno = 111 are identical in every repetition ...]
00:28:38.254 [2024-12-05 21:21:46.066716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.254 [2024-12-05 21:21:46.066748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.254 qpair failed and we were unable to recover it. 00:28:38.254 [2024-12-05 21:21:46.066940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.254 [2024-12-05 21:21:46.066972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.254 qpair failed and we were unable to recover it. 00:28:38.254 [2024-12-05 21:21:46.067201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.254 [2024-12-05 21:21:46.067232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.254 qpair failed and we were unable to recover it. 00:28:38.254 [2024-12-05 21:21:46.067524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.254 [2024-12-05 21:21:46.067556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.254 qpair failed and we were unable to recover it. 00:28:38.254 [2024-12-05 21:21:46.067835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.254 [2024-12-05 21:21:46.067866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 
00:28:38.255 [2024-12-05 21:21:46.068115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.068147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.068255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.068286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.068494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.068527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.068800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.068831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.068977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.069009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 
00:28:38.255 [2024-12-05 21:21:46.069261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.069293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.069597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.069630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.069828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.069860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.070095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.070132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.070411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.070444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 
00:28:38.255 [2024-12-05 21:21:46.070729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.070762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.070978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.071009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.071170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.071202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.071499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.071531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.071805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.071837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 
00:28:38.255 [2024-12-05 21:21:46.072131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.072162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.072364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.072412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.072690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.072723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.072932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.072963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.073157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.073189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 
00:28:38.255 [2024-12-05 21:21:46.073400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.073434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.073630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.073662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.073896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.073927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.074126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.074158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.074353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.074398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 
00:28:38.255 [2024-12-05 21:21:46.074677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.074708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.074903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.255 [2024-12-05 21:21:46.074935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.255 qpair failed and we were unable to recover it. 00:28:38.255 [2024-12-05 21:21:46.075131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.075162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.075436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.075469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.075623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.075654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 
00:28:38.256 [2024-12-05 21:21:46.075953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.075985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.076266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.076297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.076548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.076581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.076796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.076829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.077121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.077152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 
00:28:38.256 [2024-12-05 21:21:46.077406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.077445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.077699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.077732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.077988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.078019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.078269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.078300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.078500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.078534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 
00:28:38.256 [2024-12-05 21:21:46.078817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.078849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.079068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.079100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.079294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.079325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.079587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.079620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.079844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.079875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 
00:28:38.256 [2024-12-05 21:21:46.080140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.080172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.080504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.080540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.080818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.080849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.256 [2024-12-05 21:21:46.081063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.256 [2024-12-05 21:21:46.081095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.256 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.081326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.081360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 
00:28:38.257 [2024-12-05 21:21:46.081588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.081620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.081899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.081931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.082184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.082216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.082419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.082452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.082647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.082679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 
00:28:38.257 [2024-12-05 21:21:46.082803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.082836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.082994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.083025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.083163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.083194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.083447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.083479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.083681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.083713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 
00:28:38.257 [2024-12-05 21:21:46.083913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.083944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.084140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.084171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.084397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.084431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.084742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.084776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.085050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.085081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 
00:28:38.257 [2024-12-05 21:21:46.085400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.085433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.085565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.085596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.085799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.085832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.086019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.086050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.086383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.086417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 
00:28:38.257 [2024-12-05 21:21:46.086621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.086652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.086903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.086935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.087139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.087172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.087400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.087434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 00:28:38.257 [2024-12-05 21:21:46.087692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.257 [2024-12-05 21:21:46.087724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.257 qpair failed and we were unable to recover it. 
00:28:38.259 [2024-12-05 21:21:46.099098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.259 [2024-12-05 21:21:46.099179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:38.259 qpair failed and we were unable to recover it.
00:28:38.261 [2024-12-05 21:21:46.117006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.261 [2024-12-05 21:21:46.117039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.261 qpair failed and we were unable to recover it. 00:28:38.261 [2024-12-05 21:21:46.117229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.261 [2024-12-05 21:21:46.117263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.261 qpair failed and we were unable to recover it. 00:28:38.261 [2024-12-05 21:21:46.117481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.261 [2024-12-05 21:21:46.117516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.261 qpair failed and we were unable to recover it. 00:28:38.261 [2024-12-05 21:21:46.117739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.261 [2024-12-05 21:21:46.117772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.261 qpair failed and we were unable to recover it. 00:28:38.261 [2024-12-05 21:21:46.118030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.118063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 
00:28:38.262 [2024-12-05 21:21:46.118305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.118337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.118476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.118508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.118702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.118737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.118900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.118933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.119189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.119223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 
00:28:38.262 [2024-12-05 21:21:46.119429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.119462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.119678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.119714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.119931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.119968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.120241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.120274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.120481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.120515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 
00:28:38.262 [2024-12-05 21:21:46.120661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.120696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.120826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.120860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.121119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.121153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.121404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.121438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.121644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.121679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 
00:28:38.262 [2024-12-05 21:21:46.121807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.121840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.122076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.122110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.122310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.122344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.122583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.122617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.122824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.122857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 
00:28:38.262 [2024-12-05 21:21:46.123079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.123113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.123320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.123354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.123607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.123640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.123942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.123975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 00:28:38.262 [2024-12-05 21:21:46.124259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.262 [2024-12-05 21:21:46.124293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.262 qpair failed and we were unable to recover it. 
00:28:38.262 [2024-12-05 21:21:46.124528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.124565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.124688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.124721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.124864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.124897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.125126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.125159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.125354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.125398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 
00:28:38.263 [2024-12-05 21:21:46.125614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.125648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.125845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.125878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.126083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.126115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.126440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.126477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.126728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.126762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 
00:28:38.263 [2024-12-05 21:21:46.127032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.127066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.127278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.127312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.127536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.127573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.127708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.127740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.128016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.128049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 
00:28:38.263 [2024-12-05 21:21:46.128308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.128341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.128496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.128529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.128808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.128841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.129082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.129114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.129326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.129358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 
00:28:38.263 [2024-12-05 21:21:46.129495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.129528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.129708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.129742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.130039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.130077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.130332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.130365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.130638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.130671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 
00:28:38.263 [2024-12-05 21:21:46.130808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.130841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.131066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.131098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.131392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.263 [2024-12-05 21:21:46.131426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.263 qpair failed and we were unable to recover it. 00:28:38.263 [2024-12-05 21:21:46.131635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.131668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.131885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.131919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 
00:28:38.264 [2024-12-05 21:21:46.132109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.132141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.132358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.132404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.132576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.132611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.132817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.132853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.133129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.133162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 
00:28:38.264 [2024-12-05 21:21:46.133393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.133427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.133657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.133689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.133833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.133868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.134103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.134137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.134401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.134437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 
00:28:38.264 [2024-12-05 21:21:46.134712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.134745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.134899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.134931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.135060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.135095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.135410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.135445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 00:28:38.264 [2024-12-05 21:21:46.135652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.264 [2024-12-05 21:21:46.135685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.264 qpair failed and we were unable to recover it. 
00:28:38.264 [2024-12-05 21:21:46.135953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.264 [2024-12-05 21:21:46.135985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:38.264 qpair failed and we were unable to recover it.
[same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt, timestamps 21:21:46.136139 through 21:21:46.164844]
00:28:38.268 [2024-12-05 21:21:46.165089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.165121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.165384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.165418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.165692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.165725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.165986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.166018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.166317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.166348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 
00:28:38.268 [2024-12-05 21:21:46.166518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.166553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.166769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.166800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.166926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.166957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.167158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.167191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.167417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.167451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 
00:28:38.268 [2024-12-05 21:21:46.167704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.167737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-12-05 21:21:46.167933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.268 [2024-12-05 21:21:46.167965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.168265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.168299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.168555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.168589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.170229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.170291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 
00:28:38.269 [2024-12-05 21:21:46.170580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.170616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.170902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.170938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.171246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.171279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.171546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.171578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.171832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.171865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 
00:28:38.269 [2024-12-05 21:21:46.172178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.172210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.172508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.172543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.172755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.172795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.172932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.172963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.173167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.173199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 
00:28:38.269 [2024-12-05 21:21:46.173454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.173487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.173620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.173653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.173794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.173826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.173969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.174002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.174261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.174294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 
00:28:38.269 [2024-12-05 21:21:46.174511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.174544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.174818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.174850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.175118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.175150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.175344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.175387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.175676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.175709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 
00:28:38.269 [2024-12-05 21:21:46.175867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.175898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.176139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.176172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.176285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.176319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-12-05 21:21:46.176587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.269 [2024-12-05 21:21:46.176622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.176847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.176880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 
00:28:38.270 [2024-12-05 21:21:46.177119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.177153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.177420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.177453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.177713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.177747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.177941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.177973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.178226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.178259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 
00:28:38.270 [2024-12-05 21:21:46.178545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.178578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.178808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.178841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.179102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.179137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.179377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.179412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.179702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.179736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 
00:28:38.270 [2024-12-05 21:21:46.179931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.179963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.180226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.180258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.180466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.180500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.180710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.180746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.180898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.180932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 
00:28:38.270 [2024-12-05 21:21:46.181151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.181183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.181333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.181365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.181649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.181684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.181880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.181911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.182062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.182094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 
00:28:38.270 [2024-12-05 21:21:46.182310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.182342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.182622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.182656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.182850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.182888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.183187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.183220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.183378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.183413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 
00:28:38.270 [2024-12-05 21:21:46.183611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.183643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.270 [2024-12-05 21:21:46.183785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.270 [2024-12-05 21:21:46.183817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.270 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.184064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.184098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.184387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.184420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.184614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.184648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 
00:28:38.271 [2024-12-05 21:21:46.184873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.184904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.185153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.185184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.185399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.185432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.185567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.185599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.185817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.185849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 
00:28:38.271 [2024-12-05 21:21:46.186043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.186076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.186316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.186349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.186569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.186603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.186755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.186787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.187009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.187040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 
00:28:38.271 [2024-12-05 21:21:46.187295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.187328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.187491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.187523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.187775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.187807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.188124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.188157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.188362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.188402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 
00:28:38.271 [2024-12-05 21:21:46.188677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.188709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.188902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.188935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.189194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.189226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.189443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.189477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.189625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.189658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 
00:28:38.271 [2024-12-05 21:21:46.189850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.189882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.190197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.190227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.190467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.190502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.190776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.190809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.191147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.191179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 
00:28:38.271 [2024-12-05 21:21:46.191491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.191525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.191664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.191696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.191848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.191881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.191988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.192020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.271 qpair failed and we were unable to recover it. 00:28:38.271 [2024-12-05 21:21:46.192255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.271 [2024-12-05 21:21:46.192287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 
00:28:38.272 [2024-12-05 21:21:46.192486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.192519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.192704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.192736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.192938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.192977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.193157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.193188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.193461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.193495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 
00:28:38.272 [2024-12-05 21:21:46.193719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.193752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.193895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.193926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.194143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.194176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.194384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.194417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.194562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.194594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 
00:28:38.272 [2024-12-05 21:21:46.194747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.195017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.195048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.195242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.195274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.195468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.195501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.195724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.195756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 
00:28:38.272 [2024-12-05 21:21:46.197311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.197364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.197683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.197719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.197918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.197950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.198150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.198183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.198393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.198427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 
00:28:38.272 [2024-12-05 21:21:46.198656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.198688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.198889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.198921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.199225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.199256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.199455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.199489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.199633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.199666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 
00:28:38.272 [2024-12-05 21:21:46.199943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.199975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.200307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.272 [2024-12-05 21:21:46.200340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.272 qpair failed and we were unable to recover it. 00:28:38.272 [2024-12-05 21:21:46.200545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.200579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.200730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.200762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.200962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.200997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 
00:28:38.273 [2024-12-05 21:21:46.201250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.201283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.201511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.201545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.201689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.201722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.201871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.201903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.202198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.202231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 
00:28:38.273 [2024-12-05 21:21:46.202442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.202477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.202680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.202713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.202850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.202883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.203021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.203067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.203211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.203245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 
00:28:38.273 [2024-12-05 21:21:46.203393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.203426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.203564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.203599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.203748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.203788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.204044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.204076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.204209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.204243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 
00:28:38.273 [2024-12-05 21:21:46.204443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.204476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.204627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.204660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.204788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.204820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.205049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.205082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.205339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.205383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 
00:28:38.273 [2024-12-05 21:21:46.205542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.205575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.205778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.205810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.206067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.206100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.206315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.206347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.206526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 
00:28:38.273 [2024-12-05 21:21:46.206787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.206819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.207042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.273 [2024-12-05 21:21:46.207076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.273 qpair failed and we were unable to recover it. 00:28:38.273 [2024-12-05 21:21:46.207325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.207357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.207524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.207558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.207762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.207795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 
00:28:38.274 [2024-12-05 21:21:46.207941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.207973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.208245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.208278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.208492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.208526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.208714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.208748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.208991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.209054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 
00:28:38.274 [2024-12-05 21:21:46.209321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.209351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.209635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.209665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.209818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.209846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.209986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.210013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.210216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.210246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 
00:28:38.274 [2024-12-05 21:21:46.210493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.210526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.210673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.210702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.210843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.210874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.211157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.211187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.211466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.211498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 
00:28:38.274 [2024-12-05 21:21:46.211627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.211655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.211861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.211891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.212185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.212216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.212400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.212430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 00:28:38.274 [2024-12-05 21:21:46.212632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.274 [2024-12-05 21:21:46.212662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.274 qpair failed and we were unable to recover it. 
00:28:38.274 [2024-12-05 21:21:46.212852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.212880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.213098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.213128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.213283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.213312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.213536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.213566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.213716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.213744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.213940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.213972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.216385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.216428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.216641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.216669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.216940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.216969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.217170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.274 [2024-12-05 21:21:46.217199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.274 qpair failed and we were unable to recover it.
00:28:38.274 [2024-12-05 21:21:46.217402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.217431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.217632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.217662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.217855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.217883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.218015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.218042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.218222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.218249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.218452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.218483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.218681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.218716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.218865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.218893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.219085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.219119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.219219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.219238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.219496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.219518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.219636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.219658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.219850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.219871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.220088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.220107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.220276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.220295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.220499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.220519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.220636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.220656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.220880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.220899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.221167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.221186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.221348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.221372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.221573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.221594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.221771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.221790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.221912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.221931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.222039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.222056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.222221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.222239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.222489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.222510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.222713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.222733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.222900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.222919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.223099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.223119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.223342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.223363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.223490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.223509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.223754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.223775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.223890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.223910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.224156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.224177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.224377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.224397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.224516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.224535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.224673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.224693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.224787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.275 [2024-12-05 21:21:46.224806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.275 qpair failed and we were unable to recover it.
00:28:38.275 [2024-12-05 21:21:46.224971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.224990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.225171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.225190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.225393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.225415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.225580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.225598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.225767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.225787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.225965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.225984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.226170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.226203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.226407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.226439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.226593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.226628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.226761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.226799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.226941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.226977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.227163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.227196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.227341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.227362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.227478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.227497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.227671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.227693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.227869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.227890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.228097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.228116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.228380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.228400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.228538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.228558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.228737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.228777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.228958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.228983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.229244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.229272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.229516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.229546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.229731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.229756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.229937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.229965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.230202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.230230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.230378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.230405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.230602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.230628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.230823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.230849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.231080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.231105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.231302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.231327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.231532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.231558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.231744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.231769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.231905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.231931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.232132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.232158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.232291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.232316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.232425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.232455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.232575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.276 [2024-12-05 21:21:46.232601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.276 qpair failed and we were unable to recover it.
00:28:38.276 [2024-12-05 21:21:46.232806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.232831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.233067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.233092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.233274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.233300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.233397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.233425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.233612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.233639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.233751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.233775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.233972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.233998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.234187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.234215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.234423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.234450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.234544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.234569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.234746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.234772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.234912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.234938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.235128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.235154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.235422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.235448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.235639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.235664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.235779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.235804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.235992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.236018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.236293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.236319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.236586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.236612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.236807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.236833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.237039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.237065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.237320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.237344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.237473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.237500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.237698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.237724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.237914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.237939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.238224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.238250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.238542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.238569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.238776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.238802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.238990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.239016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.239194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.239226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.239503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.239536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.239677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.239709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.239861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.239893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.240171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.240203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.240419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.240454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.240760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.240794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.241017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.241051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.241340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.241382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.277 [2024-12-05 21:21:46.241594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.277 [2024-12-05 21:21:46.241627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.277 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.241782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.241821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.242052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.242084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.242360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.242404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.242601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.242635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.242797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.242830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.243090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.243123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.243394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.243428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.243726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.243762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.243955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.243989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.244220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.244252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.244471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.244507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.244646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.244682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.244885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.244919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.245268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.245302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.245587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.245623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.245832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.245865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.246095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.246128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.246423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.246457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.246608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.246641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.246841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.246874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.247170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.247202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.247355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.247400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.247655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.247688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.247963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.247997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.248216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.248248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.248416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.248450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.248654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.248689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.248845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.248886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.249097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.249132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.249340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.249385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.249622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.249657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.249862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.249896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.250191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.250224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.250471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.250507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.250638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.250670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.250928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.250961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.251235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.251268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.278 [2024-12-05 21:21:46.251419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.278 [2024-12-05 21:21:46.251452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.278 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.251667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.251699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.251904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.251937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.252216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.252249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.252311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bdb20 (9): Bad file descriptor
00:28:38.279 [2024-12-05 21:21:46.252630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.252682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.252861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.252884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.253001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.253022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.253202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.253220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.253330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.253348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.253569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.253589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.253760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.253781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.254022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.254049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.254227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.254248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.254428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.254453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.254649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.254672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.254853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.254876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.255000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.255020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.255284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.255306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.255469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.255489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.255700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.255719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.255833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.255851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.255964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.255982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.256172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.256190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.256383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.256405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.256626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.256648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.256774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.256792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.256907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.256925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.257163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.279 [2024-12-05 21:21:46.257186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.279 qpair failed and we were unable to recover it.
00:28:38.279 [2024-12-05 21:21:46.257440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.279 [2024-12-05 21:21:46.257471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.279 qpair failed and we were unable to recover it. 00:28:38.279 [2024-12-05 21:21:46.257588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.279 [2024-12-05 21:21:46.257610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.279 qpair failed and we were unable to recover it. 00:28:38.279 [2024-12-05 21:21:46.257795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.279 [2024-12-05 21:21:46.257824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.279 qpair failed and we were unable to recover it. 00:28:38.279 [2024-12-05 21:21:46.258083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.279 [2024-12-05 21:21:46.258107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.279 qpair failed and we were unable to recover it. 00:28:38.279 [2024-12-05 21:21:46.258221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.279 [2024-12-05 21:21:46.258242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.279 qpair failed and we were unable to recover it. 
00:28:38.279 [2024-12-05 21:21:46.258436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.279 [2024-12-05 21:21:46.258463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.258724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.258745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.258911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.258928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.259196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.259215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.259417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.259432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.259603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.259617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.259731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.259745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.259853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.259868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.260062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.260080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.260243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.260257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.260435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.260450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.260624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.260638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.260803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.260819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.261047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.261071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.261303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.261321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.261547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.261568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.261735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.261751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.261912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.261927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.262086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.262101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.262341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.262354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.262534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.262610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.262879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.262957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.263278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.263316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.263555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.263588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.263749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.263782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.263968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.264000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.264216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.264248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.264472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.264506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.264693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.264725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.264878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.264911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.265201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.265219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.265381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.265400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.265574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.265589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.265749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.265764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.265950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.265966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 
00:28:38.280 [2024-12-05 21:21:46.266138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.266161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.266334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.266350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.266486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.266510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.280 qpair failed and we were unable to recover it. 00:28:38.280 [2024-12-05 21:21:46.266634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.280 [2024-12-05 21:21:46.266650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.266769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.266785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.266987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.267003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.267176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.267192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.267356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.267378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.267539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.267554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.267714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.267730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.267884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.267902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.268066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.268084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.268234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.268248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.268343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.268356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.268551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.268566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.268794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.268814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.268948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.268966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.269209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.269226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.269398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.269420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.269711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.269732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.270008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.270023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.270244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.270257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.270456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.270471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.270709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.270727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.270844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.270862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.270963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.270980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.271140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.271158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.271321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.271340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.271460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.271479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.271649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.271663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.271810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.271825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.271914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.271926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.272092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.272107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.272336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.272359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.272484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.272502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.272672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.272689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.272793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.272809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.273002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.273019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.273268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.273288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.273396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.273413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 
00:28:38.281 [2024-12-05 21:21:46.273561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.281 [2024-12-05 21:21:46.273575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.281 qpair failed and we were unable to recover it. 00:28:38.281 [2024-12-05 21:21:46.273665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.282 [2024-12-05 21:21:46.273678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.282 qpair failed and we were unable to recover it. 00:28:38.282 [2024-12-05 21:21:46.273945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.282 [2024-12-05 21:21:46.273967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.282 qpair failed and we were unable to recover it. 00:28:38.282 [2024-12-05 21:21:46.274214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.282 [2024-12-05 21:21:46.274235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.282 qpair failed and we were unable to recover it. 00:28:38.282 [2024-12-05 21:21:46.274484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.282 [2024-12-05 21:21:46.274505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.282 qpair failed and we were unable to recover it. 
00:28:38.282 [2024-12-05 21:21:46.274720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.282 [2024-12-05 21:21:46.274738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.282 qpair failed and we were unable to recover it.
00:28:38.285 [2024-12-05 21:21:46.297463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.297485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.297703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.297721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.297882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.297899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.298004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.298020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.298270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.298292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 
00:28:38.285 [2024-12-05 21:21:46.298456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.298476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.298639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.298653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.298814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.298828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.299003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.299019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.299265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.299285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 
00:28:38.285 [2024-12-05 21:21:46.299523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.299543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.299664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.299680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.299893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.299914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.300158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.300176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.300426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.300441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 
00:28:38.285 [2024-12-05 21:21:46.300554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.300569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.300748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.300766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.300944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.300961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.301110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.301127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.301366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.301405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 
00:28:38.285 [2024-12-05 21:21:46.301598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.301615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.301785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.301802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.301969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.301982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.302167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.302181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.302283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.302298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 
00:28:38.285 [2024-12-05 21:21:46.302488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.302509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.302623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.302638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.285 [2024-12-05 21:21:46.302850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.285 [2024-12-05 21:21:46.302867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.285 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.303048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.303063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.303241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.303265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.303505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.303523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.303741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.303756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.303916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.303932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.304110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.304128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.304287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.304302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.304542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.304562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.304660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.304675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.304835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.304852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.305107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.305126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.305286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.305299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.305453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.305468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.305677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.305698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.305862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.305878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.306168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.306186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.306404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.306425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.306615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.306637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.306754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.306772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.306928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.306945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.307132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.307150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.307293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.307307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.307462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.307480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.307641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.307655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.307816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.307833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.307906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.307920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.308103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.308122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.308290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.308308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.308480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.308499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.308608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.308623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.308714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.308729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.308893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.308912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.309074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.309087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.309242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.309255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.309420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.309435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.309525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.309540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 00:28:38.286 [2024-12-05 21:21:46.309627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.309641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.286 qpair failed and we were unable to recover it. 
00:28:38.286 [2024-12-05 21:21:46.309783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.286 [2024-12-05 21:21:46.309799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.309988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.310154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.310411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.310535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 
00:28:38.287 [2024-12-05 21:21:46.310749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.310863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.310960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.310972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.311067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.311079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.311286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.311306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 
00:28:38.287 [2024-12-05 21:21:46.311460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.311478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.311620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.311638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.311797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.311812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.311900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.311914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 00:28:38.287 [2024-12-05 21:21:46.312175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.287 [2024-12-05 21:21:46.312193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.287 qpair failed and we were unable to recover it. 
00:28:38.290 [2024-12-05 21:21:46.334278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.334294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.334505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.334523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.334748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.334766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.335040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.335056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.335193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.335206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 
00:28:38.290 [2024-12-05 21:21:46.335455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.335476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.335696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.335712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.335973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.335990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.336152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.336170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.336255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.336268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 
00:28:38.290 [2024-12-05 21:21:46.336491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.336506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.336735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.336751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.336929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.336947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.337150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.337166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.337401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.337419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 
00:28:38.290 [2024-12-05 21:21:46.337635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.337654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.337908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.337925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.338162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.338175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.338258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.338272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.338485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.338506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 
00:28:38.290 [2024-12-05 21:21:46.338735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.338752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.338938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.338954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.290 qpair failed and we were unable to recover it. 00:28:38.290 [2024-12-05 21:21:46.339126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.290 [2024-12-05 21:21:46.339144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.291 qpair failed and we were unable to recover it. 00:28:38.291 [2024-12-05 21:21:46.339390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.339413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.339537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.339554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-05 21:21:46.339791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.339815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.340077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.340095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.340192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.340207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.340437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.340453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.340684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.340701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-05 21:21:46.340953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.340969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.341113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.341129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.341287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.341302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.341558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.341581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.341817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.341831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-05 21:21:46.342064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.342081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.342233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.342250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.342393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.342570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.342585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.342722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.342741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-05 21:21:46.342949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.342968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.343196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.343212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.343441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.343457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.343712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.343731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.343871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.343886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-05 21:21:46.344078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.344094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.344326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.344347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.571 [2024-12-05 21:21:46.344593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-12-05 21:21:46.344611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.571 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.344840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.344854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.345099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.345119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [2024-12-05 21:21:46.345262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.345278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.345504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.345521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.345694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.345710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.345923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.345941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.346153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.346169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [2024-12-05 21:21:46.346316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.346328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.346548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.346567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.346717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.346734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.346941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.346956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.347124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.347140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [2024-12-05 21:21:46.347467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.347492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.347635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.347649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.347888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.347902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.348132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.348152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.348239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.348253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [2024-12-05 21:21:46.348408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.348425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.348590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.348606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.348810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.348830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.348990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.349008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.349162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.349175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [2024-12-05 21:21:46.349403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.349417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.349506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.349519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.349677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.349694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.349929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.349945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 00:28:38.572 [2024-12-05 21:21:46.350102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.350118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [2024-12-05 21:21:46.350279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-12-05 21:21:46.350297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.572 qpair failed and we were unable to recover it. 
00:28:38.572 [log condensed: the same error pair (posix.c:1054:posix_sock_create connect() failed, errno = 111 / ECONNREFUSED, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420) repeated for every subsequent connection attempt between 21:21:46.350 and 21:21:46.373; each attempt ended with "qpair failed and we were unable to recover it."] 
00:28:38.575 [2024-12-05 21:21:46.374100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.575 [2024-12-05 21:21:46.374114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.575 qpair failed and we were unable to recover it. 00:28:38.575 [2024-12-05 21:21:46.374282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.575 [2024-12-05 21:21:46.374299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.575 qpair failed and we were unable to recover it. 00:28:38.575 [2024-12-05 21:21:46.374559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.374578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.374807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.374822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.375052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.375071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 
00:28:38.576 [2024-12-05 21:21:46.375220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.375233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.375434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.375448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.375699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.375717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.375876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.375891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.376108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.376129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 
00:28:38.576 [2024-12-05 21:21:46.376352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.376374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.376480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.376494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.376720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.376736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.376900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.376914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.377136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.377153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 
00:28:38.576 [2024-12-05 21:21:46.377379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.377396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.377546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.377562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.377778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.377796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.378032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.378049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.378278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.378291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 
00:28:38.576 [2024-12-05 21:21:46.378516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.378535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.378687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.378702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.378928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.378944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.379025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.379039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.379259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.379279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 
00:28:38.576 [2024-12-05 21:21:46.379451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.379468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.379642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.379655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.379818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.379832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.379985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.380002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.380144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.380159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 
00:28:38.576 [2024-12-05 21:21:46.380363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.380384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.380560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.380576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.576 [2024-12-05 21:21:46.380656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.576 [2024-12-05 21:21:46.380669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.576 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.380821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.380837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.381075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.381089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.381242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.381254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.381409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.381427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.381593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.381608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.381777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.381793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.381944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.381960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.382120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.382138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.382388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.382407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.382564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.382577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.382831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.382849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.383005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.383021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.383262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.383278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.383433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.383450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.383657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.383674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.383761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.383774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.383856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.383871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.384078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.384091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.384240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.384256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.384489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.384508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.384658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.384673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.384833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.384849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.385031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.385049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.385263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.385284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.385539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.385563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.385794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.385814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.386070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.386087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.386235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.386250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.386451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.386467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.386714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.386730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.386890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.386906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.387172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.387192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 
00:28:38.577 [2024-12-05 21:21:46.387309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.387324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.387554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.387570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.387821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.387836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.388070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.577 [2024-12-05 21:21:46.388088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.577 qpair failed and we were unable to recover it. 00:28:38.577 [2024-12-05 21:21:46.388314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.388330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 
00:28:38.578 [2024-12-05 21:21:46.388589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.388610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.388703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.388717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.388884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.388899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.389098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.389111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.389249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.389264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 
00:28:38.578 [2024-12-05 21:21:46.389408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.389425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.389743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.389795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.390018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.390053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.390239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.390271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 00:28:38.578 [2024-12-05 21:21:46.390464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.578 [2024-12-05 21:21:46.390500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420 00:28:38.578 qpair failed and we were unable to recover it. 
00:28:38.578 [... the same error pair repeated continuously from 21:21:46.390 through 21:21:46.413: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 (and later 0x7fa9e0000b90) with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it." ...]
00:28:38.581 [2024-12-05 21:21:46.413561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.413579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.413814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.413829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.413941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.413956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.414104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.414119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.414327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.414344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 
00:28:38.581 [2024-12-05 21:21:46.414591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.414611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.414775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.414800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.414976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.414995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.415161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.415179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.415386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.415402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 
00:28:38.581 [2024-12-05 21:21:46.415556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.415569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.415703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.415718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.415922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.415937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.416024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.416037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.416136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.416150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 
00:28:38.581 [2024-12-05 21:21:46.416285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.416300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.416454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.416474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.416707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.416723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.416947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.416960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.417225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.417243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 
00:28:38.581 [2024-12-05 21:21:46.417426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.417442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.417675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.417692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.417873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.417890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.418031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.418045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.581 qpair failed and we were unable to recover it. 00:28:38.581 [2024-12-05 21:21:46.418268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.581 [2024-12-05 21:21:46.418282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.418494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.418512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.418610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.418625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.418866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.418881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.419061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.419076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.419256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.419272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.419432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.419449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.419652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.419666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.419804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.419818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.419990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.420010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.420112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.420127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.420353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.420373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.420610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.420627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.420863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.420882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.421020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.421032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.421185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.421197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.421286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.421298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.421570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.421590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.421753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.421769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.422011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.422029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.422178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.422194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.422418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.422434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.422636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.422654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.422880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.422898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.423172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.423188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.423338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.423354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.423612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.423632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.423778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.423791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.423956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.423968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.424183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.424201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.424300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.424315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.424472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.424488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.424729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.424746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.424884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.424900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.425037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.425053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.425257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.425270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 
00:28:38.582 [2024-12-05 21:21:46.425491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.425508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.425662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.582 [2024-12-05 21:21:46.425679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.582 qpair failed and we were unable to recover it. 00:28:38.582 [2024-12-05 21:21:46.425769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.425782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.425987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.426003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.426233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.426252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 
00:28:38.583 [2024-12-05 21:21:46.426394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.426410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.426666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.426681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.426908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.426924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.427160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.427178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.427395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.427412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 
00:28:38.583 [2024-12-05 21:21:46.427637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.427656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.427741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.427754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.427847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.427861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.427958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.427971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 00:28:38.583 [2024-12-05 21:21:46.428112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.583 [2024-12-05 21:21:46.428125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.583 qpair failed and we were unable to recover it. 
00:28:38.583 [2024-12-05 21:21:46.428272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.583 [2024-12-05 21:21:46.428286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.583 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats continuously, timestamps 21:21:46.428434 through 21:21:46.451302, every entry with errno = 111 for tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420 ...]
00:28:38.586 [2024-12-05 21:21:46.451559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.451577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.451838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.451857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.452094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.452109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.452259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.452272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.452492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.452511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 
00:28:38.586 [2024-12-05 21:21:46.452612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.452626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.452781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.452796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.453024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.453041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.453268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.453286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.453537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.453553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 
00:28:38.586 [2024-12-05 21:21:46.453719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.453736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.453892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.453910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.454108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.454123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.454364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.454395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.454600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.454617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 
00:28:38.586 [2024-12-05 21:21:46.454768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.454782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.454990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.455008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.455231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.586 [2024-12-05 21:21:46.455249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.586 qpair failed and we were unable to recover it. 00:28:38.586 [2024-12-05 21:21:46.455409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.455427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.455577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.455592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.455846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.455864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.456032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.456049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.456254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.456268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.456500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.456517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.456744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.456762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.456940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.456957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.457198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.457214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.457371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.457389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.457567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.457582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.457729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.457742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.457916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.457931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.458180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.458196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.458413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.458430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.458685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.458705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.458870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.458890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.459057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.459074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.459282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.459305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.459521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.459540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.459823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.459842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.459999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.460013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.460165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.460180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.460338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.460352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.460517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.460534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.460658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.460706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.460900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.460933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.461140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.461172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.461443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.461478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.461763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.461795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.461933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.461964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.462146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.462178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.462443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.462476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.462685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.462717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.462956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.462983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.463103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.463119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 00:28:38.587 [2024-12-05 21:21:46.463322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.463341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.587 qpair failed and we were unable to recover it. 
00:28:38.587 [2024-12-05 21:21:46.463614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.587 [2024-12-05 21:21:46.463632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.463839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.463854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.464087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.464105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.464279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.464295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.464506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.464525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 
00:28:38.588 [2024-12-05 21:21:46.464767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.464787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.464946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.464962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.465166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.465180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.465335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.465349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.465518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.465539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 
00:28:38.588 [2024-12-05 21:21:46.465684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.465697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.465856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.465871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.466140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.466160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.466389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.466407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.466637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.466650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 
00:28:38.588 [2024-12-05 21:21:46.466795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.466811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.466898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.466912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.467167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.467183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.467284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.467299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 00:28:38.588 [2024-12-05 21:21:46.467551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.467573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it. 
00:28:38.588 [2024-12-05 21:21:46.467773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.588 [2024-12-05 21:21:46.467790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.588 qpair failed and we were unable to recover it.
00:28:38.588-00:28:38.591 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" pair, followed by "qpair failed and we were unable to recover it.", repeated continuously for tqpair=0x7fa9e0000b90 (addr=10.0.0.2, port=4420) from 21:21:46.467958 through 21:21:46.488859 ...]
00:28:38.591 [2024-12-05 21:21:46.489107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.489124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 00:28:38.591 [2024-12-05 21:21:46.489377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.489395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 00:28:38.591 [2024-12-05 21:21:46.489621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.489637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 00:28:38.591 [2024-12-05 21:21:46.489792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.489808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 00:28:38.591 [2024-12-05 21:21:46.490016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.490031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 
00:28:38.591 [2024-12-05 21:21:46.490132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.490148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 00:28:38.591 [2024-12-05 21:21:46.490411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.591 [2024-12-05 21:21:46.490433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.591 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.490594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.490610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.490833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.490846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.491061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.491078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.491163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.491176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.491379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.491396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.491604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.491620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.491703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.491720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.491950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.491969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.492065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.492078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.492241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.492255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.492501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.492521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.492729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.492745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.492900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.492915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.493156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.493175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.493410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.493428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.493567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.493579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.493807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.493824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.493918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.493932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.494114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.494129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.494356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.494386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.494595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.494613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.494715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.494728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.494831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.494846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.495007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.495168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.495349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.495515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.495622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.495769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.495940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.495955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.496105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.496121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.496301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.496317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.496541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.496555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 
00:28:38.592 [2024-12-05 21:21:46.496731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.496748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.496925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.496941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.592 [2024-12-05 21:21:46.497179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.592 [2024-12-05 21:21:46.497194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.592 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.497351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.497370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.497604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.497623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.497763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.497776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.498007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.498022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.498116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.498132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.498271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.498287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.498426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.498443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.498617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.498632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.498840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.498857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.499129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.499147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.499298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.499316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.499420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.499433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.499568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.499584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.499693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.499708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.499919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.499934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.500176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.500192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.500362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.500386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.500547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.500562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.500795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.500808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.500985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.501001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.501232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.501248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.501480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.501499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.501656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.501673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.501852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.501869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.502107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.502121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.502297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.502312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.502519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.502537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.502744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.502760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.502991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.503009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.503276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.503298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.503471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.503489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.503719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.503742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 
00:28:38.593 [2024-12-05 21:21:46.503835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.503850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.504078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.504096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.504303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.504317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.593 [2024-12-05 21:21:46.504496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.593 [2024-12-05 21:21:46.504513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.593 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.504731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.504747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.504900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.504916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.505074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.505089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.505243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.505260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.505484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.505500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.505659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.505671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.505833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.505849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.506059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.506076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.506214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.506229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.506386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.506404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.506559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.506575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.506783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.506800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.506890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.506901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.507131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.507146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.507396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.507420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.507640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.507656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.507883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.507899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.508111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.508130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.508296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.508309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.508528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.508544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.508625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.508639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.508793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.508809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.508967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.508982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.509184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.509200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.509432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.509452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.509723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.509740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.509893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.509906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.510155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.510174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.510358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.510390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.510626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.510642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 00:28:38.594 [2024-12-05 21:21:46.510815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.510833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.594 qpair failed and we were unable to recover it. 
00:28:38.594 [2024-12-05 21:21:46.511089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.594 [2024-12-05 21:21:46.511105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.511318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.511331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.511487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.511505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.511711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.511728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.511957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.511973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.512147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.512164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.512396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.512415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.512563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.512576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.512717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.512729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.512809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.512822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.512986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.513004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.513217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.513232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.513337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.513352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.513515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.513532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.513750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.513770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.513929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.513942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.514168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.514182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.514320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.514337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.514556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.514573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.514792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.514807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.514987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.515005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.515090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.515104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.515331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.515346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.515505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.515524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.515663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.515678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.515849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.515865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.516045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.516060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.516212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.516227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.516448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.516468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.516702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.516717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.516969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.516984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.517187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.517204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.517411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.517429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.517667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.517683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.517893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.517913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 
00:28:38.595 [2024-12-05 21:21:46.518143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.518161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.518324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.595 [2024-12-05 21:21:46.518345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.595 qpair failed and we were unable to recover it. 00:28:38.595 [2024-12-05 21:21:46.518543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.518561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.518641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.518657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.518819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.518834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 
00:28:38.596 [2024-12-05 21:21:46.519060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.519074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.519235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.519250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.519318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.519331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.519576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.519594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.519805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.519821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 
00:28:38.596 [2024-12-05 21:21:46.520033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.520051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.520276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.520290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.520440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.520456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.520640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.520658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.520795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.520809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 
00:28:38.596 [2024-12-05 21:21:46.520948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.520964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.521111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.521125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.521351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.521373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.521530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.521545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.521761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.521775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 
00:28:38.596 [2024-12-05 21:21:46.521980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.521998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.522160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.522174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.522391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.522408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.522588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.522604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 00:28:38.596 [2024-12-05 21:21:46.522752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.522767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it. 
00:28:38.596 [2024-12-05 21:21:46.522923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.596 [2024-12-05 21:21:46.522937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.596 qpair failed and we were unable to recover it.
00:28:38.599 [... preceding message pair repeated for every reconnect attempt from 21:21:46.523099 through 21:21:46.545869: same errno = 111 (connection refused), same tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:28:38.599 [2024-12-05 21:21:46.546112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.599 [2024-12-05 21:21:46.546126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.599 qpair failed and we were unable to recover it. 00:28:38.599 [2024-12-05 21:21:46.546275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.599 [2024-12-05 21:21:46.546290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.599 qpair failed and we were unable to recover it. 00:28:38.599 [2024-12-05 21:21:46.546427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.546445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.546698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.546714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.546944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.546960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.547134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.547152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.547383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.547403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.547616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.547638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.547789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.547807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.547965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.547981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.548061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.548076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.548211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.548225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.548447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.548463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.548564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.548579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.548747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.548762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.548964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.548979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.549175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.549192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.549342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.549357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.549524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.549539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.549715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.549728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.549964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.549982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.550145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.550161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.550338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.550353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.550630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.550672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.550875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.550907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.551040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.551073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.551337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.551379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.551630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.551662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.551913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.551944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.552202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.552234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.552422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.552455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.552715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.552747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.552939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.552970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.553230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.553262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.553457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.553490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 
00:28:38.600 [2024-12-05 21:21:46.553774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.553805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.554041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.554057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.554234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.554250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.600 [2024-12-05 21:21:46.554406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.600 [2024-12-05 21:21:46.554425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.600 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.554651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.554667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.554837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.554850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.555001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.555018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.555123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.555138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.555341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.555356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.555612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.555630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.555789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.555806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.555962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.555977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.556152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.556165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.556360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.556382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.556647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.556664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.556901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.556917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.557151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.557170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.557407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.557424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.557591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.557603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.557693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.557705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.557929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.557947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.558087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.558102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.558320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.558335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.558561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.558580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.558735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.558750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.558930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.558943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.559051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.559064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.559163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.559178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.559379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.559397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.559538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.559553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.559729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.559744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.559887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.559904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.560138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.560154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.560385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.560399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.560604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.560622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.601 [2024-12-05 21:21:46.560856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.560871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.561091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.561105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.561309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.561327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.561535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.561555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 00:28:38.601 [2024-12-05 21:21:46.561839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.601 [2024-12-05 21:21:46.561862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.601 qpair failed and we were unable to recover it. 
00:28:38.602 [2024-12-05 21:21:46.562073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.602 [2024-12-05 21:21:46.562094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.602 qpair failed and we were unable to recover it. 00:28:38.602 [2024-12-05 21:21:46.562327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.602 [2024-12-05 21:21:46.562345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.602 qpair failed and we were unable to recover it. 00:28:38.602 [2024-12-05 21:21:46.562521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.602 [2024-12-05 21:21:46.562535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.602 qpair failed and we were unable to recover it. 00:28:38.602 [2024-12-05 21:21:46.562671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.602 [2024-12-05 21:21:46.562686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.602 qpair failed and we were unable to recover it. 00:28:38.602 [2024-12-05 21:21:46.562900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.602 [2024-12-05 21:21:46.562916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.602 qpair failed and we were unable to recover it. 
00:28:38.605 [2024-12-05 21:21:46.585451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.585466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.585693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.585710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.585917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.585933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.586043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.586058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.586307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.586326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 
00:28:38.605 [2024-12-05 21:21:46.586572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.586592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.586822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.586838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.587003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.587019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.587230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.587246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.587427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.587443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 
00:28:38.605 [2024-12-05 21:21:46.587674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.587691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.587927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.587941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.588085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.588100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.588327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.588345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.588533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.588549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 
00:28:38.605 [2024-12-05 21:21:46.588727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.588742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.588896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.588914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.589144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.589161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.589334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.589347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.589580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.589599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 
00:28:38.605 [2024-12-05 21:21:46.589897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.589915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.590151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.590167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.590397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.590417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.590578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.590596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.590825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.590846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 
00:28:38.605 [2024-12-05 21:21:46.591077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.591096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.591193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.591209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.605 [2024-12-05 21:21:46.591427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.605 [2024-12-05 21:21:46.591444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.605 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.591515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.591526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.591662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.591676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.591770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.591784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.592036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.592052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.592285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.592301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.592382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.592396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.592611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.592629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.592788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.592802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.593001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.593016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.593253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.593271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.593545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.593563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.593775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.593794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.593953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.593968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.594191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.594204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.594268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.594279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.594428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.594446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.594615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.594630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.594794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.594813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.594962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.594977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.595154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.595172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.595408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.595426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.595586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.595599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.595800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.595815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.595978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.595996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.596163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.596177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.596313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.596328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.596561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.596578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.596732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.596748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.596898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.596913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.597134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.597147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.597356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.597386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.597558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.597573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.597809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.597825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.598033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.598052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.598263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.598279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.598506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.598521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 00:28:38.606 [2024-12-05 21:21:46.598687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.606 [2024-12-05 21:21:46.598703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.606 qpair failed and we were unable to recover it. 
00:28:38.606 [2024-12-05 21:21:46.598913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.598930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.599137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.599152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.599388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.599409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.599676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.599693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.599898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.599912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 
00:28:38.607 [2024-12-05 21:21:46.600117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.600134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.600341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.600357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.600524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.600540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.600743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.600761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 00:28:38.607 [2024-12-05 21:21:46.600915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.600931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it. 
00:28:38.607 [2024-12-05 21:21:46.601097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.607 [2024-12-05 21:21:46.601109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.607 qpair failed and we were unable to recover it.
00:28:38.610 [... the same three-line sequence (posix.c:1054 connect() failed, errno = 111 → nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it") repeats continuously from 21:21:46.601097 through 21:21:46.625601, mostly for tqpair=0x7fa9e0000b90 with a short run for tqpair=0x7fa9dc000b90; repeated occurrences condensed ...]
00:28:38.610 [2024-12-05 21:21:46.625810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.625824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.626075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.626092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.626191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.626205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.626467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.626486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.626731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.626748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 
00:28:38.610 [2024-12-05 21:21:46.626905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.626920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.627173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.627188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.627451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.627470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.627623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.627639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.627789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.627805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 
00:28:38.610 [2024-12-05 21:21:46.628009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.628024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.628123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.628138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.628339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.628356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.628596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.628609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.628710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.628722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 
00:28:38.610 [2024-12-05 21:21:46.628874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.628897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.629055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.629070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.629275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.629291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.629444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.629462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.610 [2024-12-05 21:21:46.629624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.629641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 
00:28:38.610 [2024-12-05 21:21:46.629898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.610 [2024-12-05 21:21:46.629915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.610 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.630165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.630181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.630341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.630357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.630596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.630613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.630897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.630917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.631170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.631186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.631415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.631430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.631581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.631597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.631751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.631767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.631987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.632003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.632105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.632119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.632322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.632342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.632604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.632623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.632797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.632811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.633047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.633065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.633224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.633240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.633338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.633351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.633458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.633474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.633704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.633723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.633936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.633956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.634139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.634155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.634389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.634414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.634563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.634580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.634789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.634809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.635034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.635049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.635215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.635230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.635371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.635387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.635614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.635630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.635784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.635800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.635975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.635992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.636161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.636175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.636309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.636321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.636486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.636503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.636705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.636722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.636868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.636883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 
00:28:38.611 [2024-12-05 21:21:46.637110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.637131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.637380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.637401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.637625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.611 [2024-12-05 21:21:46.637638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.611 qpair failed and we were unable to recover it. 00:28:38.611 [2024-12-05 21:21:46.637784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.637799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.638026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.638044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 
00:28:38.612 [2024-12-05 21:21:46.638223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.638240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.638494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.638514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.638689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.638706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.638909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.638923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.639077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.639091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 
00:28:38.612 [2024-12-05 21:21:46.639310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.639328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.639524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.639696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.639712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.639943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.639960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.640041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.640055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 
00:28:38.612 [2024-12-05 21:21:46.640255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.640271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.640449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.640463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.640647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.640663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.640819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.640835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 00:28:38.612 [2024-12-05 21:21:46.640982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.640997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 
00:28:38.612 [2024-12-05 21:21:46.641133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.612 [2024-12-05 21:21:46.641148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.612 qpair failed and we were unable to recover it. 
00:28:38.898 (identical connect()/qpair-recovery error triplet repeats continuously from 21:21:46.641133 through 21:21:46.664093 for tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420; repeated occurrences elided) 
00:28:38.898 [2024-12-05 21:21:46.664273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.664288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.664546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.664563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.664754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.664771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.665001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.665019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.665176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.665190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.665416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.665431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.665658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.665676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.665854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.665869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.666103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.666119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.666349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.666371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.666581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.666596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.666808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.666823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.666985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.667007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.667241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.667256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.667396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.667413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.667659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.667678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.667901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.667917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.668147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.668162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.668379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.668399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.668511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.668526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.668688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.668704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.668908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.668923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.669071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.669088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.669295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.669311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.669533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.669548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.669693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.669709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.669965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.669982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.670123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.670138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.670386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.670406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.670664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.670681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.670887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.670900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.671036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.671052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.671209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.671225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.671376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.671394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.671543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.671560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 
00:28:38.899 [2024-12-05 21:21:46.671719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.671734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.671889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.899 [2024-12-05 21:21:46.671906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.899 qpair failed and we were unable to recover it. 00:28:38.899 [2024-12-05 21:21:46.672141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.672158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.672358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.672378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.672645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.672663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.672886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.672904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.673042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.673058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.673209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.673226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.673297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.673311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.673521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.673536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.673705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.673720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.673928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.673946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.674099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.674115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.674260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.674274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.674455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.674473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.674696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.674714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.674959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.674972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.675174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.675195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.675408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.675426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.675582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.675598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.675824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.675839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.675975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.675991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.676234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.676249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.676455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.676470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.676699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.676717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.676930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.676946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.677129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.677145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.677303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.677320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.677462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.677481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.677697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.677715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.677945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.677967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.678234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.678251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.678410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.678426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.678590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.678604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.678812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.678828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.679032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.679048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.679237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.679253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.679401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.679419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.679630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.679646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 00:28:38.900 [2024-12-05 21:21:46.679861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.900 [2024-12-05 21:21:46.679874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.900 qpair failed and we were unable to recover it. 
00:28:38.900 [2024-12-05 21:21:46.680031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.901 [2024-12-05 21:21:46.680047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.901 qpair failed and we were unable to recover it.
00:28:38.901 [the preceding two messages — posix.c:1054:posix_sock_create connect() failed, errno = 111, and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 — repeat identically for every reconnect attempt from 21:21:46.680 through 21:21:46.702, each attempt ending with "qpair failed and we were unable to recover it."]
00:28:38.904 [2024-12-05 21:21:46.702830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.702847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.703079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.703094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.703244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.703259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.703476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.703496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.703651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.703665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 
00:28:38.904 [2024-12-05 21:21:46.703729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.703740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.703952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.703967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.704141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.704159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.704361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.704381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.704591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.704608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 
00:28:38.904 [2024-12-05 21:21:46.704834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.704852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.705040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.705054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.705139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.705150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.705355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.705376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.705561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.705577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 
00:28:38.904 [2024-12-05 21:21:46.705719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.705733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.705965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.705981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.706055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.706068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.706269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.706288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.706496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.706513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 
00:28:38.904 [2024-12-05 21:21:46.706746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.706768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.707006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.707024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.707257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.707276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.707422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.707437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.707587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.707603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 
00:28:38.904 [2024-12-05 21:21:46.707842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.707857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.708022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.708037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.708257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.708276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.708519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.708536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.904 [2024-12-05 21:21:46.708713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.708727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 
00:28:38.904 [2024-12-05 21:21:46.708902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.904 [2024-12-05 21:21:46.708919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.904 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.709141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.709157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.709311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.709327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.709476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.709493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.709727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.709746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.709906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.709919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.710155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.710171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.710406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.710426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.710683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.710700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.710932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.710951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.711109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.711124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.711350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.711362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.711525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.711542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.711641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.711656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.711877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.711892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.712097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.712113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.712298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.712316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.712522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.712539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.712767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.712781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.712945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.712962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.713167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.713184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.713391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.713408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.713660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.713679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.713826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.713842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.714046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.714060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.714282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.714299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.714540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.714559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.714733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.714749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.714981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.714998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.715179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.715196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.715290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.715301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.715434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.715448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.715673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.715695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.715911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.715927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.716077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.716093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 
00:28:38.905 [2024-12-05 21:21:46.716298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.716313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.716405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.716420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.716562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.716578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.905 [2024-12-05 21:21:46.716757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.905 [2024-12-05 21:21:46.716770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.905 qpair failed and we were unable to recover it. 00:28:38.906 [2024-12-05 21:21:46.717069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.906 [2024-12-05 21:21:46.717087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.906 qpair failed and we were unable to recover it. 
00:28:38.906 [2024-12-05 21:21:46.717253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.906 [2024-12-05 21:21:46.717269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.906 qpair failed and we were unable to recover it. 00:28:38.906 [2024-12-05 21:21:46.717502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.906 [2024-12-05 21:21:46.717519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.906 qpair failed and we were unable to recover it. 00:28:38.906 [2024-12-05 21:21:46.717723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.906 [2024-12-05 21:21:46.717739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.906 qpair failed and we were unable to recover it. 00:28:38.906 [2024-12-05 21:21:46.717898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.906 [2024-12-05 21:21:46.717913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.906 qpair failed and we were unable to recover it. 00:28:38.906 [2024-12-05 21:21:46.718063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.906 [2024-12-05 21:21:46.718076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.906 qpair failed and we were unable to recover it. 
00:28:38.906 [2024-12-05 21:21:46.718257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.718270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.718474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.718494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.718750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.718766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.718854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.718868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.718947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.718960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.719134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.719151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.719388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.719407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.719645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.719659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.719744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.719757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.719962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.719979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.720133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.720149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.720378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.720396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.720558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.720575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.720785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.720805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.720978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.720996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.721178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.721199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.721438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.721456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.721624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.721639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.721722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.721734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.721871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.721885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.722060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.722076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.722248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.722263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.722476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.722494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.722649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.722666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.722874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.722890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.723108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.723121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.723294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.723309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.723460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.723481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.723639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.723655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.723908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.723925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.724084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.724101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.906 [2024-12-05 21:21:46.724336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.906 [2024-12-05 21:21:46.724353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.906 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.724467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.724480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.724571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.724582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.724795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.724812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.725017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.725033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.725263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.725278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.725550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.725571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.725808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.725822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.725911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.725924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.726012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.726026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.726194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.726212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.726362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.726383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.726555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.726571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.726710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.726724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.726980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.726999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.727260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.727273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.727364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.727384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.727611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.727629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.727864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.728035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.728050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.728227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.728244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.728459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.728475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.728729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.728744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.728952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.728970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.729122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.729137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.729306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.729322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.729573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.729592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.729847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.729864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.730109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.730125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.730337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.730354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.907 [2024-12-05 21:21:46.730527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.907 [2024-12-05 21:21:46.730544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.907 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.730681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.730696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.730948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.730966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.731204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.731220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.731441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.731458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.731666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.731683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.731910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.731929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.732201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.732221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.732454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.732471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.732739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.732754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.732920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.732937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.733209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.733224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.733443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.733461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.733563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.733578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.733725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.733740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.733943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.733956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.734126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.734143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.734351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.734382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.734558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.734574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.734663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.734677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.734865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.734882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.735089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.735108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.735269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.735285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.735511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.735535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.735799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.735817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.735901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.735914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.736148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.736162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.736389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.736407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.736479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.736492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.736639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.736654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.736858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.736874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.737037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.737054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.737142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.737156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.737323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.737360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.737645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.737677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.737948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.737980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.738266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.738297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.738569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.738602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.738726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.908 [2024-12-05 21:21:46.738757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.908 qpair failed and we were unable to recover it.
00:28:38.908 [2024-12-05 21:21:46.738998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.739028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.739239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.739271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.739510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.739543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.739806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.739830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.739996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.740115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.740278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.740511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.740766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.740888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.740986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.740999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.741220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.741233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.741392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.741408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.741613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.741629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.741728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.909 [2024-12-05 21:21:46.741742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.909 qpair failed and we were unable to recover it.
00:28:38.909 [2024-12-05 21:21:46.741888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.741903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.742133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.742151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.742309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.742325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.742527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.742542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.742740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.742756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 
00:28:38.909 [2024-12-05 21:21:46.742927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.742944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.743178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.743195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.743446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.743465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.743620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.743636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.743801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.743815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 
00:28:38.909 [2024-12-05 21:21:46.744017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.744032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.744187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.744203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.744439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.744458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.744664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.744680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.744936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.744955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 
00:28:38.909 [2024-12-05 21:21:46.745207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.745222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.745448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.745466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.745699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.745715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.745970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.745987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.746261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.746308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 
00:28:38.909 [2024-12-05 21:21:46.746564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.746599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.746867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.746900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.747180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.909 [2024-12-05 21:21:46.747212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.909 qpair failed and we were unable to recover it. 00:28:38.909 [2024-12-05 21:21:46.747491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.747524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.747765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.747797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.748087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.748119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.748301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.748332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.748629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.748662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.748873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.748905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.749108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.749139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.749329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.749361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.749549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.749575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.749730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.749745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.749854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.749869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.750037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.750052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.750276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.750291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.750433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.750453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.750660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.750678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.750899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.750913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.751063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.751079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.751291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.751308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.751526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.751545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.751768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.751787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.751966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.751983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.752187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.752200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.752292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.752305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.752532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.752554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.752791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.752806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.752974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.752989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.753147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.753165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.753324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.753339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.753476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.753489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.753693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.753707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.753862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.753879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.754037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.754051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.754205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.754221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.754378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.754395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.754574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.754592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.754763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.754779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.754932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.754948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 
00:28:38.910 [2024-12-05 21:21:46.755119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.910 [2024-12-05 21:21:46.755134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.910 qpair failed and we were unable to recover it. 00:28:38.910 [2024-12-05 21:21:46.755338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.755354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.755630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.755648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.755912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.755931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.756165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.756181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 
00:28:38.911 [2024-12-05 21:21:46.756335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.756348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.756583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.756603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.756704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.756718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.756929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.756944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.757100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.757115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 
00:28:38.911 [2024-12-05 21:21:46.757259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.757276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.757510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.757528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.757676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.757689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.757773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.757786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 00:28:38.911 [2024-12-05 21:21:46.757941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.757958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 
00:28:38.911 [2024-12-05 21:21:46.758108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.911 [2024-12-05 21:21:46.758123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.911 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420 repeat continuously from 21:21:46.758298 through 21:21:46.782393 — repeats omitted]
00:28:38.914 [2024-12-05 21:21:46.782642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.782658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.782810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.782825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.783052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.783071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.783181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.783196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.783352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.783365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 
00:28:38.914 [2024-12-05 21:21:46.783603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.783618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.783819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.783836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.784005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.784020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.784226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.784242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.784468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.784487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 
00:28:38.914 [2024-12-05 21:21:46.784644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.784659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.784898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.784913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.785083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.785100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.785308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.785324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.785488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.785505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 
00:28:38.914 [2024-12-05 21:21:46.785709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.785727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.914 [2024-12-05 21:21:46.785953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.914 [2024-12-05 21:21:46.785970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.914 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.786130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.786143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.786242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.786255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.786426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.786445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.786663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.786679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.786828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.786843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.787092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.787112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.787292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.787308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.787512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.787526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.787681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.787695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.787843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.787861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.787960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.787975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.788178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.788194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.788343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.788358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.788606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.788627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.788784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.788797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.789070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.789087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.789334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.789350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.789560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.789578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.789785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.789802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.790033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.790050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.790280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.790294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.790536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.790556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.790707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.790722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.790971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.790988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.791196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.791214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.791494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.791515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.791733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.791755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.791992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.792010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.792173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.792188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.792333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.792346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.792487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.792502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.792703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.792719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.792868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.792883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.793093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.793108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.793331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.793348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.793572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.793587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.793734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.793748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 00:28:38.915 [2024-12-05 21:21:46.793952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.915 [2024-12-05 21:21:46.793969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.915 qpair failed and we were unable to recover it. 
00:28:38.915 [2024-12-05 21:21:46.794217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.794232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.794433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.794454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.794684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.794703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.794817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.794829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.795060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.795075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 
00:28:38.916 [2024-12-05 21:21:46.795253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.795271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.795442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.795459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.795686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.795702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.795956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.795975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.796185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.796201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 
00:28:38.916 [2024-12-05 21:21:46.796356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.796373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.796551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.796568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.796660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.796674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.796927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.796942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.797033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.797046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 
00:28:38.916 [2024-12-05 21:21:46.797190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.797206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.797354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.797383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.797639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.797656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.797820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.797832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 00:28:38.916 [2024-12-05 21:21:46.797901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.916 [2024-12-05 21:21:46.797913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.916 qpair failed and we were unable to recover it. 
00:28:38.916 [2024-12-05 21:21:46.798151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.916 [2024-12-05 21:21:46.798168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.916 qpair failed and we were unable to recover it.
00:28:38.920 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7fa9e0000b90 (addr=10.0.0.2, port=4420) repeats without change from 2024-12-05 21:21:46.798 through 21:21:46.821 ...]
00:28:38.920 [2024-12-05 21:21:46.821821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.821838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.822018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.822034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.822183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.822195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.822378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.822394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.822483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.822497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 
00:28:38.920 [2024-12-05 21:21:46.822676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.822691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.822894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.822910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.823135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.823155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.823399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.823417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.823647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.823662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 
00:28:38.920 [2024-12-05 21:21:46.823917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.823936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.824163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.824179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.824351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.824371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.824533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.824550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.824785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.824801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 
00:28:38.920 [2024-12-05 21:21:46.824902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.824913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.824990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.825003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.825260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.825278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.825427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.825443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.825647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.825663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 
00:28:38.920 [2024-12-05 21:21:46.825757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.825770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.826014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.826033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.826267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.920 [2024-12-05 21:21:46.826281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.920 qpair failed and we were unable to recover it. 00:28:38.920 [2024-12-05 21:21:46.826441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.826457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.826697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.826714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 
00:28:38.921 [2024-12-05 21:21:46.826945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.826961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.827061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.827079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.827229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.827246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.827346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.827361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.827600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.827613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 
00:28:38.921 [2024-12-05 21:21:46.827781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.827796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.827990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.828007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.828210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.828226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.828483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.828502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.828707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.828725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 
00:28:38.921 [2024-12-05 21:21:46.828863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.828876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.829099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.829114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.829357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.829379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.829618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.829634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.829785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.829799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 
00:28:38.921 [2024-12-05 21:21:46.829967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.829984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.830150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.830163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.830333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.830347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.830484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.830501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.830639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.830655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 
00:28:38.921 [2024-12-05 21:21:46.830906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.830921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.831152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.921 [2024-12-05 21:21:46.831170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.921 qpair failed and we were unable to recover it. 00:28:38.921 [2024-12-05 21:21:46.831399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.831416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.831567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.831580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.831777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.831791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 
00:28:38.922 [2024-12-05 21:21:46.831964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.831980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.832193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.832209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.832412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.832429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.832516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.832530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.832715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.832732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 
00:28:38.922 [2024-12-05 21:21:46.832932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.832945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.833149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.833165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.833314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.833330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.833475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.833491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.833744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.833759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 
00:28:38.922 [2024-12-05 21:21:46.833933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.833950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.834202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.834223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.834452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.834476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.834709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.834728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.834985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.835003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 
00:28:38.922 [2024-12-05 21:21:46.835251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.835268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.835427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.835448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.835677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.835692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.835842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.835857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.836084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.836102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 
00:28:38.922 [2024-12-05 21:21:46.836282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.836295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.836391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.836403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.836629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.836647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.836873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.836889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.837051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.837067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 
00:28:38.922 [2024-12-05 21:21:46.837245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.837261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.837415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.837432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.837668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.837684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.837832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.922 [2024-12-05 21:21:46.837847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.922 qpair failed and we were unable to recover it. 00:28:38.922 [2024-12-05 21:21:46.837980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.837996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 
00:28:38.923 [2024-12-05 21:21:46.838224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.838240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.838397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.838414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.838644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.838662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.838820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.838836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.839061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.839075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 
00:28:38.923 [2024-12-05 21:21:46.839321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.839337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.839588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.839608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.839818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.839834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.840010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.840025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.840271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.840289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 
00:28:38.923 [2024-12-05 21:21:46.840471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.840486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.840718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.840735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.840909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.840925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.841135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.841151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.841318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.841334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 
00:28:38.923 [2024-12-05 21:21:46.841522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.841540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.841804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.841820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.841989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.842004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.842225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.842242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.842486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.842503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 
00:28:38.923 [2024-12-05 21:21:46.842600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.842617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.842762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.842778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.842915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.842931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.843101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.843114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.843200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.843211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 
00:28:38.923 [2024-12-05 21:21:46.843435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.843453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.843707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.843727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.843955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.923 [2024-12-05 21:21:46.843970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.923 qpair failed and we were unable to recover it. 00:28:38.923 [2024-12-05 21:21:46.844107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.844124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.844273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.844290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 
00:28:38.924 [2024-12-05 21:21:46.844539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.844553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.844749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.844766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.844976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.844991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.845166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.845182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.845363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.845384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 
00:28:38.924 [2024-12-05 21:21:46.845543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.845559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.845706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.845720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.845922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.845935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.846165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.846182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.846341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.846356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 
00:28:38.924 [2024-12-05 21:21:46.846571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.846588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.846792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.846811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.847039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.847056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.847224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.847237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.847440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.847458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 
00:28:38.924 [2024-12-05 21:21:46.847617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.847633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.847848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.847864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.848046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.848062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.848242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.848259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.848444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.848464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 
00:28:38.924 [2024-12-05 21:21:46.848614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.848632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.848785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.848804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.849036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.849053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.849199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.849216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.849382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.849397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 
00:28:38.924 [2024-12-05 21:21:46.849650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.924 [2024-12-05 21:21:46.849667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.924 qpair failed and we were unable to recover it. 00:28:38.924 [2024-12-05 21:21:46.849892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.849908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.850063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.850079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.850226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.850242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1470104 Killed "${NVMF_APP[@]}" "$@" 00:28:38.925 [2024-12-05 21:21:46.850408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.850425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 
00:28:38.925 [2024-12-05 21:21:46.850598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.850613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.850694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.850705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.850864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.850877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:38.925 [2024-12-05 21:21:46.851064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.851082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.851269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.851284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 
00:28:38.925 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:38.925 [2024-12-05 21:21:46.851441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.851459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.851688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.851708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.925 [2024-12-05 21:21:46.851827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.851842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.925 [2024-12-05 21:21:46.852064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.852079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 
00:28:38.925 [2024-12-05 21:21:46.852231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.852246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.925 [2024-12-05 21:21:46.852404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.852423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.852676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.852692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.852923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.852941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.853150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.853167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 
00:28:38.925 [2024-12-05 21:21:46.853402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.853417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.853570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.853585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.853735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.853752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.853936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.853978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.854246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.854278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 
00:28:38.925 [2024-12-05 21:21:46.854464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.854497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.854760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.854792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.854928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.854960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.855198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.855232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 00:28:38.925 [2024-12-05 21:21:46.855493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.925 [2024-12-05 21:21:46.855520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.925 qpair failed and we were unable to recover it. 
00:28:38.925 [2024-12-05 21:21:46.855668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.926 [2024-12-05 21:21:46.855685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.926 qpair failed and we were unable to recover it. 00:28:38.926 [2024-12-05 21:21:46.855832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.926 [2024-12-05 21:21:46.855846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.926 qpair failed and we were unable to recover it. 00:28:38.926 [2024-12-05 21:21:46.855926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.926 [2024-12-05 21:21:46.855939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.926 qpair failed and we were unable to recover it. 00:28:38.926 [2024-12-05 21:21:46.856085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.926 [2024-12-05 21:21:46.856102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.926 qpair failed and we were unable to recover it. 00:28:38.926 [2024-12-05 21:21:46.856280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.926 [2024-12-05 21:21:46.856298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.926 qpair failed and we were unable to recover it. 
00:28:38.926 [2024-12-05 21:21:46.856564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.856585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.856690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.856709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.856928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.856947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.857186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.857202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.857419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.857437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.857643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.857660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.857866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.857882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.858117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.858136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.858295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.858312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.858490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.858504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.858657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.858671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.858873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.858891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.859072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.859087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.859255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.859270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.859440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.859460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1470870
00:28:38.926 [2024-12-05 21:21:46.859695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.859714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.859894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.859908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1470870
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:38.926 [2024-12-05 21:21:46.860088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.860104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.860308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.860327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1470870 ']'
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.860424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.860439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.860599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.860616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:38.926 [2024-12-05 21:21:46.860821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.860838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:38.926 [2024-12-05 21:21:46.860988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.861005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.926 qpair failed and we were unable to recover it.
00:28:38.926 [2024-12-05 21:21:46.861221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.926 [2024-12-05 21:21:46.861238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:38.927 [2024-12-05 21:21:46.861389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.861406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:38.927 [2024-12-05 21:21:46.861584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.861605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.861784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.861801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 21:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.927 [2024-12-05 21:21:46.861953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.861969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.862118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.862134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.862342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.862360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.862522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.862541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.862704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.862721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.862900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.862922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.863081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.863100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.863283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.863298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.863437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.863450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.863701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.863717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.863960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.863977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.864071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.864084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.864182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.864196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.864382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.864407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.864578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.864593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.864799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.864814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.864966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.864981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.865117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.865135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.927 [2024-12-05 21:21:46.865328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.927 [2024-12-05 21:21:46.865344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.927 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.865491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.865507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.865777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.865798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.865961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.865977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.866145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.866159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.866361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.866384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.866558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.866574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.866710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.866725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.866977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.866993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.867164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.867183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.867390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.867408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.867623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.867638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.867795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.867814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.868053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.868070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.868275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.868291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.868566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.868588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.868742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.868757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.868930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.868946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.869122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.869145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.869295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.869312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.869528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.869545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.869813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.869831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.869932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.869946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.870097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.870112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.870315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.870329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.870553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.870573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.870754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.870771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.871000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.871017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.928 [2024-12-05 21:21:46.871164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.928 [2024-12-05 21:21:46.871180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.928 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.871343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.871360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.871520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.871537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.871740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.871754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.871905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.871922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.872082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.872098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.872313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.872329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.872623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.872642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.872762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.872779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.873012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.873028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.873181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.873196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.873283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.873296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.873548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.873567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.873721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.873740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.873877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.873894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.874157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.874176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.874415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.874430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.874665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.874700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.874900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.874932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.875172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.875204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.875434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.875467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.875714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.875747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.875942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.875975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.876128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.876154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.876374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.876394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.876566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.876584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.876684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.876700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.876910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.876928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.877108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.877130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.877312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.877329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.877493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.877516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.877730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.877745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.877899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.877914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.878062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.878080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.878295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.878312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.878527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.929 [2024-12-05 21:21:46.878545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.929 qpair failed and we were unable to recover it.
00:28:38.929 [2024-12-05 21:21:46.878687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.929 [2024-12-05 21:21:46.878704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.929 qpair failed and we were unable to recover it. 00:28:38.929 [2024-12-05 21:21:46.878921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.929 [2024-12-05 21:21:46.878937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.929 qpair failed and we were unable to recover it. 00:28:38.929 [2024-12-05 21:21:46.879163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.879178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.879361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.879385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.879539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.879555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.879639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.879654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.879855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.879871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.880023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.880041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.880253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.880269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.880421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.880438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.880580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.880596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.880748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.880767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.880844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.880857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.880998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.881013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.881112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.881150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.881383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.881403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.881643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.881659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.881757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.881770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.881882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.881896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.882106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.882124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.882350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.882366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.882601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.882619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.882733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.882749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.883004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.883024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.883241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.883255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.883403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.883419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.883620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.883638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.883784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.883800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.884049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.884066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.884252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.884270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.884442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.884459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.884599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.884612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.884721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.884734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.884813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.884825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.885027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.885048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.930 [2024-12-05 21:21:46.885263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.885279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.885428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.885445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.885592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.885610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.885749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.885765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 00:28:38.930 [2024-12-05 21:21:46.885853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.930 [2024-12-05 21:21:46.885864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.930 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.886038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.886126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.886226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.886408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.886581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.886679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.886807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.886822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.886987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.887004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.887111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.887127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.887278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.887304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.887506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.887521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.887701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.887719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.887821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.887836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.888041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.888266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.888398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.888494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.888588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.888678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.888758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.888910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.888923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.889018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.889098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.889313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.889473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.889698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.889790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.889952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.889969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.890104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.890276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 
00:28:38.931 [2024-12-05 21:21:46.890380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.890541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.890704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.890812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.931 qpair failed and we were unable to recover it. 00:28:38.931 [2024-12-05 21:21:46.890899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.931 [2024-12-05 21:21:46.890916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.932 qpair failed and we were unable to recover it. 
00:28:38.932 [2024-12-05 21:21:46.890996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.932 [2024-12-05 21:21:46.891009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.932 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt from 21:21:46.891143 through 21:21:46.907267 (log prefix 00:28:38.932-00:28:38.935): each connect() to 10.0.0.2, port=4420 on tqpair=0x7fa9e0000b90 fails with errno = 111 and the qpair is not recovered ...]
00:28:38.935 [2024-12-05 21:21:46.907500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.907516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.907673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.907689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.907771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.907785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.907949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.907965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.908062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.908076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 
00:28:38.935 [2024-12-05 21:21:46.908312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.908331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.908487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.908503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.908663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.908677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.908824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.908836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.908925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.908938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 
00:28:38.935 [2024-12-05 21:21:46.909022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.909037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.909181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.909196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.909380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.909398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.909488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.909502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 00:28:38.935 [2024-12-05 21:21:46.909658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.935 [2024-12-05 21:21:46.909673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.935 qpair failed and we were unable to recover it. 
00:28:38.935 [2024-12-05 21:21:46.909894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.935 [2024-12-05 21:21:46.909914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.935 qpair failed and we were unable to recover it.
00:28:38.935 [2024-12-05 21:21:46.909987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.935 [2024-12-05 21:21:46.909999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.935 qpair failed and we were unable to recover it.
00:28:38.935 [2024-12-05 21:21:46.910145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.935 [2024-12-05 21:21:46.910158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.935 qpair failed and we were unable to recover it.
00:28:38.935 [2024-12-05 21:21:46.910316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.935 [2024-12-05 21:21:46.910331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.935 qpair failed and we were unable to recover it.
00:28:38.935 [2024-12-05 21:21:46.910492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.935 [2024-12-05 21:21:46.910490] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:28:38.935 [2024-12-05 21:21:46.910515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.935 qpair failed and we were unable to recover it.
00:28:38.935 [2024-12-05 21:21:46.910537] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:38.935 [2024-12-05 21:21:46.910668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.935 [2024-12-05 21:21:46.910687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.935 qpair failed and we were unable to recover it.
00:28:38.935 [2024-12-05 21:21:46.910836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.936 [2024-12-05 21:21:46.910850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.936 qpair failed and we were unable to recover it.
00:28:38.936 [2024-12-05 21:21:46.911012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.936 [2024-12-05 21:21:46.911027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.936 qpair failed and we were unable to recover it.
00:28:38.936 [2024-12-05 21:21:46.911110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.936 [2024-12-05 21:21:46.911125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.936 qpair failed and we were unable to recover it.
00:28:38.936 [2024-12-05 21:21:46.911207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.911222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.911427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.911444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.911521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.911533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.911615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.911627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.911692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.911704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 
00:28:38.936 [2024-12-05 21:21:46.911859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.911877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.912023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.912039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.912273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.912290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.912385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.912403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.912505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.912521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 
00:28:38.936 [2024-12-05 21:21:46.912729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.912748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.912839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.912854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 
00:28:38.936 [2024-12-05 21:21:46.913457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.913835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.913976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 
00:28:38.936 [2024-12-05 21:21:46.914092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.914183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.914274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.914442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.914540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 
00:28:38.936 [2024-12-05 21:21:46.914701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.914867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.914976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.914992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.915065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.915081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.915169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.915184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 
00:28:38.936 [2024-12-05 21:21:46.915269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.915283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.915484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.915505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.915739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.915755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.915851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.936 [2024-12-05 21:21:46.915863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.936 qpair failed and we were unable to recover it. 00:28:38.936 [2024-12-05 21:21:46.916022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.916037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 
00:28:38.937 [2024-12-05 21:21:46.916131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.916146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.916242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.916256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.916435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.916452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.916626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.916643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.916806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.916824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 
00:28:38.937 [2024-12-05 21:21:46.917066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.917082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.917287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.917301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.917393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.917409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.917669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.917687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.917831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.917846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 
00:28:38.937 [2024-12-05 21:21:46.918002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.918018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.918114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.918131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.918221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.918237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.918496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.918514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 00:28:38.937 [2024-12-05 21:21:46.918695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.918709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it. 
00:28:38.937 [2024-12-05 21:21:46.918862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.937 [2024-12-05 21:21:46.918879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.937 qpair failed and we were unable to recover it.
[The same three-message sequence — connect() failed with errno = 111 (ECONNREFUSED), the sock connection error for tqpair=0x7fa9e0000b90 against addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every retry from 21:21:46.919123 through 21:21:46.936323; only the per-attempt timestamps differ, so the repeats are elided here.]
00:28:38.942 [2024-12-05 21:21:46.936600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.942 [2024-12-05 21:21:46.936657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:38.942 qpair failed and we were unable to recover it.
00:28:38.942 [2024-12-05 21:21:46.936887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.942 [2024-12-05 21:21:46.936925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:38.942 qpair failed and we were unable to recover it.
00:28:38.942 [2024-12-05 21:21:46.937341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.942 [2024-12-05 21:21:46.937357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.942 qpair failed and we were unable to recover it. 00:28:38.942 [2024-12-05 21:21:46.937542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.943 [2024-12-05 21:21:46.937560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.943 qpair failed and we were unable to recover it. 00:28:38.943 [2024-12-05 21:21:46.937700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.943 [2024-12-05 21:21:46.937718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.943 qpair failed and we were unable to recover it. 00:28:38.943 [2024-12-05 21:21:46.937788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.943 [2024-12-05 21:21:46.937802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.943 qpair failed and we were unable to recover it. 00:28:38.943 [2024-12-05 21:21:46.937954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.943 [2024-12-05 21:21:46.937969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.943 qpair failed and we were unable to recover it. 
00:28:38.943 [2024-12-05 21:21:46.938113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.943 [2024-12-05 21:21:46.938126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.943 qpair failed and we were unable to recover it. 00:28:38.943 [2024-12-05 21:21:46.938213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.944 [2024-12-05 21:21:46.938226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.944 qpair failed and we were unable to recover it. 00:28:38.944 [2024-12-05 21:21:46.938313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.944 [2024-12-05 21:21:46.938327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.944 qpair failed and we were unable to recover it. 00:28:38.944 [2024-12-05 21:21:46.938478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.944 [2024-12-05 21:21:46.938497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.944 qpair failed and we were unable to recover it. 00:28:38.944 [2024-12-05 21:21:46.938657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.944 [2024-12-05 21:21:46.938676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.944 qpair failed and we were unable to recover it. 
00:28:38.944 [2024-12-05 21:21:46.938759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.944 [2024-12-05 21:21:46.938775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.944 qpair failed and we were unable to recover it. 00:28:38.944 [2024-12-05 21:21:46.938918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.944 [2024-12-05 21:21:46.938934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.944 qpair failed and we were unable to recover it. 00:28:38.944 [2024-12-05 21:21:46.939086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.939102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 00:28:38.945 [2024-12-05 21:21:46.939273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.939289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 00:28:38.945 [2024-12-05 21:21:46.939440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.939455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 
00:28:38.945 [2024-12-05 21:21:46.939622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.939635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 00:28:38.945 [2024-12-05 21:21:46.939718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.939731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 00:28:38.945 [2024-12-05 21:21:46.939835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.939851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 00:28:38.945 [2024-12-05 21:21:46.939986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.945 [2024-12-05 21:21:46.940001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.945 qpair failed and we were unable to recover it. 00:28:38.945 [2024-12-05 21:21:46.940157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 
00:28:38.946 [2024-12-05 21:21:46.940240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 00:28:38.946 [2024-12-05 21:21:46.940416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 00:28:38.946 [2024-12-05 21:21:46.940596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 00:28:38.946 [2024-12-05 21:21:46.940700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 00:28:38.946 [2024-12-05 21:21:46.940865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 
00:28:38.946 [2024-12-05 21:21:46.940955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.940966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.946 qpair failed and we were unable to recover it. 00:28:38.946 [2024-12-05 21:21:46.941042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.946 [2024-12-05 21:21:46.941054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.941106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.941268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.941429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 
00:28:38.947 [2024-12-05 21:21:46.941546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.941649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.941815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.941911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.941926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.947 qpair failed and we were unable to recover it. 00:28:38.947 [2024-12-05 21:21:46.942012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.947 [2024-12-05 21:21:46.942026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.948 qpair failed and we were unable to recover it. 
00:28:38.948 [2024-12-05 21:21:46.942122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.948 [2024-12-05 21:21:46.942136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.948 qpair failed and we were unable to recover it. 00:28:38.948 [2024-12-05 21:21:46.942226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.948 [2024-12-05 21:21:46.942240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.948 qpair failed and we were unable to recover it. 00:28:38.948 [2024-12-05 21:21:46.942470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.948 [2024-12-05 21:21:46.942489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.948 qpair failed and we were unable to recover it. 00:28:38.948 [2024-12-05 21:21:46.942630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.949 [2024-12-05 21:21:46.942642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.949 qpair failed and we were unable to recover it. 00:28:38.949 [2024-12-05 21:21:46.942709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.949 [2024-12-05 21:21:46.942721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.949 qpair failed and we were unable to recover it. 
00:28:38.949 [2024-12-05 21:21:46.942785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.949 [2024-12-05 21:21:46.942798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.949 qpair failed and we were unable to recover it. 00:28:38.950 [2024-12-05 21:21:46.942981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.950 [2024-12-05 21:21:46.942997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.950 qpair failed and we were unable to recover it. 00:28:38.950 [2024-12-05 21:21:46.943068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.950 [2024-12-05 21:21:46.943080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.950 qpair failed and we were unable to recover it. 00:28:38.950 [2024-12-05 21:21:46.943166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.950 [2024-12-05 21:21:46.943182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.950 qpair failed and we were unable to recover it. 00:28:38.950 [2024-12-05 21:21:46.943268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.950 [2024-12-05 21:21:46.943281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.950 qpair failed and we were unable to recover it. 
00:28:38.950 [2024-12-05 21:21:46.943354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.950 [2024-12-05 21:21:46.943372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.950 qpair failed and we were unable to recover it. 00:28:38.951 [2024-12-05 21:21:46.943445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.951 [2024-12-05 21:21:46.943459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.951 qpair failed and we were unable to recover it. 00:28:38.951 [2024-12-05 21:21:46.943552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.951 [2024-12-05 21:21:46.943565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.951 qpair failed and we were unable to recover it. 00:28:38.951 [2024-12-05 21:21:46.943644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.951 [2024-12-05 21:21:46.943657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.951 qpair failed and we were unable to recover it. 00:28:38.951 [2024-12-05 21:21:46.943743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.951 [2024-12-05 21:21:46.943761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.952 qpair failed and we were unable to recover it. 
00:28:38.952 [2024-12-05 21:21:46.943837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.952 [2024-12-05 21:21:46.943849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.952 qpair failed and we were unable to recover it. 00:28:38.952 [2024-12-05 21:21:46.943983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.952 [2024-12-05 21:21:46.943996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.952 qpair failed and we were unable to recover it. 00:28:38.952 [2024-12-05 21:21:46.944130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.952 [2024-12-05 21:21:46.944144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.952 qpair failed and we were unable to recover it. 00:28:38.952 [2024-12-05 21:21:46.944212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.952 [2024-12-05 21:21:46.944224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.952 qpair failed and we were unable to recover it. 00:28:38.952 [2024-12-05 21:21:46.944389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.952 [2024-12-05 21:21:46.944408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.952 qpair failed and we were unable to recover it. 
00:28:38.953 [2024-12-05 21:21:46.944552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.953 [2024-12-05 21:21:46.944565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.953 qpair failed and we were unable to recover it. 00:28:38.953 [2024-12-05 21:21:46.944712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.953 [2024-12-05 21:21:46.944727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.953 qpair failed and we were unable to recover it. 00:28:38.953 [2024-12-05 21:21:46.944809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.953 [2024-12-05 21:21:46.944824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.953 qpair failed and we were unable to recover it. 00:28:38.953 [2024-12-05 21:21:46.944903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.953 [2024-12-05 21:21:46.944917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.953 qpair failed and we were unable to recover it. 00:28:38.953 [2024-12-05 21:21:46.945071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.954 [2024-12-05 21:21:46.945088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.954 qpair failed and we were unable to recover it. 
00:28:38.954 [2024-12-05 21:21:46.945173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.954 [2024-12-05 21:21:46.945186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.954 qpair failed and we were unable to recover it. 00:28:38.954 [2024-12-05 21:21:46.945253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.954 [2024-12-05 21:21:46.945266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.954 qpair failed and we were unable to recover it. 00:28:38.954 [2024-12-05 21:21:46.945343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.954 [2024-12-05 21:21:46.945356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.954 qpair failed and we were unable to recover it. 00:28:38.955 [2024-12-05 21:21:46.945447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.945459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 00:28:38.955 [2024-12-05 21:21:46.945617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.945632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 
00:28:38.955 [2024-12-05 21:21:46.945719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.945742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 00:28:38.955 [2024-12-05 21:21:46.945815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.945829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 00:28:38.955 [2024-12-05 21:21:46.945966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.945979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 00:28:38.955 [2024-12-05 21:21:46.946068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.946083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 00:28:38.955 [2024-12-05 21:21:46.946174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.946187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 
00:28:38.955 [2024-12-05 21:21:46.946281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.955 [2024-12-05 21:21:46.946295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.955 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.946430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.946446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.946607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.946634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.946724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.946737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.946950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.946966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 
00:28:38.956 [2024-12-05 21:21:46.947065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.947077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.947241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.947257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.947330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.947344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.947454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.956 [2024-12-05 21:21:46.947468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.956 qpair failed and we were unable to recover it. 00:28:38.956 [2024-12-05 21:21:46.947552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.957 [2024-12-05 21:21:46.947567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.957 qpair failed and we were unable to recover it. 
00:28:38.957 [2024-12-05 21:21:46.947652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.957 [2024-12-05 21:21:46.947666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.957 qpair failed and we were unable to recover it.
00:28:38.957 [2024-12-05 21:21:46.947745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.957 [2024-12-05 21:21:46.947759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.957 qpair failed and we were unable to recover it.
00:28:38.957 [2024-12-05 21:21:46.947831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.957 [2024-12-05 21:21:46.947844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.957 qpair failed and we were unable to recover it.
00:28:38.957 [2024-12-05 21:21:46.947938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.957 [2024-12-05 21:21:46.947951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.957 qpair failed and we were unable to recover it.
00:28:38.957 [2024-12-05 21:21:46.948027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.957 [2024-12-05 21:21:46.948041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.948848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.948997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.949012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.949190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.958 [2024-12-05 21:21:46.949205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.958 qpair failed and we were unable to recover it.
00:28:38.958 [2024-12-05 21:21:46.949350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.959 [2024-12-05 21:21:46.949372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.959 qpair failed and we were unable to recover it.
00:28:38.959 [2024-12-05 21:21:46.949444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.959 [2024-12-05 21:21:46.949459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.959 qpair failed and we were unable to recover it.
00:28:38.959 [2024-12-05 21:21:46.949608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.959 [2024-12-05 21:21:46.949625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.959 qpair failed and we were unable to recover it.
00:28:38.959 [2024-12-05 21:21:46.949773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.959 [2024-12-05 21:21:46.949789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.959 qpair failed and we were unable to recover it.
00:28:38.959 [2024-12-05 21:21:46.949924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.959 [2024-12-05 21:21:46.949938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.959 qpair failed and we were unable to recover it.
00:28:38.960 [2024-12-05 21:21:46.950176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.960 [2024-12-05 21:21:46.950193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.960 qpair failed and we were unable to recover it.
00:28:38.960 [2024-12-05 21:21:46.950354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.960 [2024-12-05 21:21:46.950376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.960 qpair failed and we were unable to recover it.
00:28:38.960 [2024-12-05 21:21:46.950515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.960 [2024-12-05 21:21:46.950529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.960 qpair failed and we were unable to recover it.
00:28:38.960 [2024-12-05 21:21:46.950622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.960 [2024-12-05 21:21:46.950635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.960 qpair failed and we were unable to recover it.
00:28:38.960 [2024-12-05 21:21:46.950777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.960 [2024-12-05 21:21:46.950792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.960 qpair failed and we were unable to recover it.
00:28:38.960 [2024-12-05 21:21:46.950950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.960 [2024-12-05 21:21:46.950967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.960 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.951146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.951167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.951325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.951341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.951505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.951523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.951665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.951681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.951831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.951847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.951998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.952011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.961 [2024-12-05 21:21:46.952197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.961 [2024-12-05 21:21:46.952214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.961 qpair failed and we were unable to recover it.
00:28:38.962 [2024-12-05 21:21:46.952365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.962 [2024-12-05 21:21:46.952404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.962 qpair failed and we were unable to recover it.
00:28:38.962 [2024-12-05 21:21:46.952505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.962 [2024-12-05 21:21:46.952519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.962 qpair failed and we were unable to recover it.
00:28:38.962 [2024-12-05 21:21:46.952661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.962 [2024-12-05 21:21:46.952676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.962 qpair failed and we were unable to recover it.
00:28:38.962 [2024-12-05 21:21:46.952816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.962 [2024-12-05 21:21:46.952831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.962 qpair failed and we were unable to recover it.
00:28:38.962 [2024-12-05 21:21:46.953093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.962 [2024-12-05 21:21:46.953118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.962 qpair failed and we were unable to recover it.
00:28:38.962 [2024-12-05 21:21:46.953231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.963 [2024-12-05 21:21:46.953244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.963 qpair failed and we were unable to recover it.
00:28:38.963 [2024-12-05 21:21:46.953428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.963 [2024-12-05 21:21:46.953442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.963 qpair failed and we were unable to recover it.
00:28:38.963 [2024-12-05 21:21:46.953519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.963 [2024-12-05 21:21:46.953531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.963 qpair failed and we were unable to recover it.
00:28:38.963 [2024-12-05 21:21:46.953617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.963 [2024-12-05 21:21:46.953632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.963 qpair failed and we were unable to recover it.
00:28:38.963 [2024-12-05 21:21:46.953849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.963 [2024-12-05 21:21:46.953864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.963 qpair failed and we were unable to recover it.
00:28:38.963 [2024-12-05 21:21:46.954023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.963 [2024-12-05 21:21:46.954038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.963 qpair failed and we were unable to recover it.
00:28:38.963 [2024-12-05 21:21:46.954220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.964 [2024-12-05 21:21:46.954236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.964 qpair failed and we were unable to recover it.
00:28:38.964 [2024-12-05 21:21:46.954330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.964 [2024-12-05 21:21:46.954344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.964 qpair failed and we were unable to recover it.
00:28:38.964 [2024-12-05 21:21:46.954418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.964 [2024-12-05 21:21:46.954433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.964 qpair failed and we were unable to recover it.
00:28:38.964 [2024-12-05 21:21:46.954587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.964 [2024-12-05 21:21:46.954601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.964 qpair failed and we were unable to recover it.
00:28:38.964 [2024-12-05 21:21:46.954732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.964 [2024-12-05 21:21:46.954746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.964 qpair failed and we were unable to recover it.
00:28:38.964 [2024-12-05 21:21:46.954829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.964 [2024-12-05 21:21:46.954840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.954924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.954938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.955038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.955134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.955226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.955316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.955497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.965 [2024-12-05 21:21:46.955693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.965 qpair failed and we were unable to recover it.
00:28:38.965 [2024-12-05 21:21:46.955842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.955857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.966 [2024-12-05 21:21:46.956001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.956017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.966 [2024-12-05 21:21:46.956105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.956116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.966 [2024-12-05 21:21:46.956178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.956189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.966 [2024-12-05 21:21:46.956252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.956264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.966 [2024-12-05 21:21:46.956356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.956374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.966 [2024-12-05 21:21:46.956524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.966 [2024-12-05 21:21:46.956539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.966 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.956633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.967 [2024-12-05 21:21:46.956648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.967 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.956737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.967 [2024-12-05 21:21:46.956750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.967 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.956889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.967 [2024-12-05 21:21:46.956904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.967 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.957050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.967 [2024-12-05 21:21:46.957065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.967 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.957273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.967 [2024-12-05 21:21:46.957290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.967 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.957441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.967 [2024-12-05 21:21:46.957457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.967 qpair failed and we were unable to recover it.
00:28:38.967 [2024-12-05 21:21:46.957533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.957546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.968 [2024-12-05 21:21:46.957609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.957621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.968 [2024-12-05 21:21:46.957696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.957709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.968 [2024-12-05 21:21:46.957957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.957976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.968 [2024-12-05 21:21:46.958116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.958131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.968 [2024-12-05 21:21:46.958337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.958352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.968 [2024-12-05 21:21:46.958424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.968 [2024-12-05 21:21:46.958437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.968 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.958536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.958555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.958642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.958656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.958853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.958868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.958944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.958956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.959051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.959063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.959193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.959209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.959358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.959379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.969 [2024-12-05 21:21:46.959457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.969 [2024-12-05 21:21:46.959470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.969 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.959628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.959643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.959783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.959799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.959876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.959889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.959967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.959981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.960072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.960086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.960173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.960187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.960344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.960357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.960462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.970 [2024-12-05 21:21:46.960475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.970 qpair failed and we were unable to recover it.
00:28:38.970 [2024-12-05 21:21:46.960616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.971 [2024-12-05 21:21:46.960629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.971 qpair failed and we were unable to recover it.
00:28:38.971 [2024-12-05 21:21:46.960720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.971 [2024-12-05 21:21:46.960733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.971 qpair failed and we were unable to recover it.
00:28:38.971 [2024-12-05 21:21:46.960806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.971 [2024-12-05 21:21:46.960820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:38.971 qpair failed and we were unable to recover it.
00:28:38.971 [2024-12-05 21:21:46.960960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.971 [2024-12-05 21:21:46.960975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.971 qpair failed and we were unable to recover it. 00:28:38.971 [2024-12-05 21:21:46.961115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.971 [2024-12-05 21:21:46.961130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.971 qpair failed and we were unable to recover it. 00:28:38.971 [2024-12-05 21:21:46.961230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.971 [2024-12-05 21:21:46.961244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.971 qpair failed and we were unable to recover it. 00:28:38.971 [2024-12-05 21:21:46.961314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.971 [2024-12-05 21:21:46.961326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.971 qpair failed and we were unable to recover it. 00:28:38.971 [2024-12-05 21:21:46.961470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.971 [2024-12-05 21:21:46.961488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.971 qpair failed and we were unable to recover it. 
00:28:38.971 [2024-12-05 21:21:46.961695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.971 [2024-12-05 21:21:46.961711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.961814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.961828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.961905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.961916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.962007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.962021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.962089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.962102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 
00:28:38.972 [2024-12-05 21:21:46.962242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.962258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.962466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.962483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.962563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.962577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-12-05 21:21:46.962751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.972 [2024-12-05 21:21:46.962766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.962855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.962870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 
00:28:38.973 [2024-12-05 21:21:46.962956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.962970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.963111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.963126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.963196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.963209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.963297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.963308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.963457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.963474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 
00:28:38.973 [2024-12-05 21:21:46.963619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.963636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.963720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.963740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.963992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.964009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.964186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.964205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 00:28:38.973 [2024-12-05 21:21:46.964346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.973 [2024-12-05 21:21:46.964363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.973 qpair failed and we were unable to recover it. 
00:28:38.973 [2024-12-05 21:21:46.964566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.964581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 00:28:38.974 [2024-12-05 21:21:46.964811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.964825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 00:28:38.974 [2024-12-05 21:21:46.964922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.964937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 00:28:38.974 [2024-12-05 21:21:46.965142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.965157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 00:28:38.974 [2024-12-05 21:21:46.965335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.965351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 
00:28:38.974 [2024-12-05 21:21:46.965423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.965437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 00:28:38.974 [2024-12-05 21:21:46.965533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.974 [2024-12-05 21:21:46.965547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.974 qpair failed and we were unable to recover it. 00:28:38.974 [2024-12-05 21:21:46.965682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.965698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 00:28:38.975 [2024-12-05 21:21:46.965774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.965787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 00:28:38.975 [2024-12-05 21:21:46.965958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.965976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 
00:28:38.975 [2024-12-05 21:21:46.966144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.966160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 00:28:38.975 [2024-12-05 21:21:46.966312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.966328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 00:28:38.975 [2024-12-05 21:21:46.966413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.966431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 00:28:38.975 [2024-12-05 21:21:46.966529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.975 [2024-12-05 21:21:46.966543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.975 qpair failed and we were unable to recover it. 00:28:38.975 [2024-12-05 21:21:46.966701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.976 [2024-12-05 21:21:46.966715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.976 qpair failed and we were unable to recover it. 
00:28:38.976 [2024-12-05 21:21:46.966915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.976 [2024-12-05 21:21:46.966933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.976 qpair failed and we were unable to recover it. 00:28:38.976 [2024-12-05 21:21:46.967089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.976 [2024-12-05 21:21:46.967104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.976 qpair failed and we were unable to recover it. 00:28:38.976 [2024-12-05 21:21:46.967257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.976 [2024-12-05 21:21:46.967272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.976 qpair failed and we were unable to recover it. 00:28:38.976 [2024-12-05 21:21:46.967439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.977 [2024-12-05 21:21:46.967455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.977 qpair failed and we were unable to recover it. 00:28:38.977 [2024-12-05 21:21:46.967681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.977 [2024-12-05 21:21:46.967701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.977 qpair failed and we were unable to recover it. 
00:28:38.977 [2024-12-05 21:21:46.967855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.977 [2024-12-05 21:21:46.967872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.977 qpair failed and we were unable to recover it. 00:28:38.977 [2024-12-05 21:21:46.968031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.977 [2024-12-05 21:21:46.968045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.977 qpair failed and we were unable to recover it. 00:28:38.977 [2024-12-05 21:21:46.968119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.977 [2024-12-05 21:21:46.968131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.977 qpair failed and we were unable to recover it. 00:28:38.977 [2024-12-05 21:21:46.968285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.977 [2024-12-05 21:21:46.968301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 00:28:38.978 [2024-12-05 21:21:46.968397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.968414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 
00:28:38.978 [2024-12-05 21:21:46.968487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.968502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 00:28:38.978 [2024-12-05 21:21:46.968586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.968599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 00:28:38.978 [2024-12-05 21:21:46.968683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.968696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 00:28:38.978 [2024-12-05 21:21:46.968863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.968879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 00:28:38.978 [2024-12-05 21:21:46.968950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.968963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 
00:28:38.978 [2024-12-05 21:21:46.969032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.978 [2024-12-05 21:21:46.969047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.978 qpair failed and we were unable to recover it. 00:28:38.978 [2024-12-05 21:21:46.969130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 00:28:38.979 [2024-12-05 21:21:46.969289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 00:28:38.979 [2024-12-05 21:21:46.969439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 00:28:38.979 [2024-12-05 21:21:46.969628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 
00:28:38.979 [2024-12-05 21:21:46.969738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 00:28:38.979 [2024-12-05 21:21:46.969836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 00:28:38.979 [2024-12-05 21:21:46.969943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.979 [2024-12-05 21:21:46.969957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.979 qpair failed and we were unable to recover it. 00:28:38.979 [2024-12-05 21:21:46.970046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 00:28:38.981 [2024-12-05 21:21:46.970199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 
00:28:38.981 [2024-12-05 21:21:46.970349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 00:28:38.981 [2024-12-05 21:21:46.970449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 00:28:38.981 [2024-12-05 21:21:46.970598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 00:28:38.981 [2024-12-05 21:21:46.970768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 00:28:38.981 [2024-12-05 21:21:46.970940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.970952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 
00:28:38.981 [2024-12-05 21:21:46.971079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.971093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.981 qpair failed and we were unable to recover it. 00:28:38.981 [2024-12-05 21:21:46.971247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.981 [2024-12-05 21:21:46.971264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.971432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.971450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.971591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.971606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.971786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.971802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 
00:28:38.982 [2024-12-05 21:21:46.971878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.971893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.972040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.972142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.972303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.972484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 
00:28:38.982 [2024-12-05 21:21:46.972563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.972733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.972842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.972857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.973013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.973027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 00:28:38.982 [2024-12-05 21:21:46.973179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.973194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.982 qpair failed and we were unable to recover it. 
00:28:38.982 [2024-12-05 21:21:46.973286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.982 [2024-12-05 21:21:46.973300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.973391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.973406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.973582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.973598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.973755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.973768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.973915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.973927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 
00:28:38.983 [2024-12-05 21:21:46.973999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.974180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.974348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.974460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.974619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 
00:28:38.983 [2024-12-05 21:21:46.974713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.983 qpair failed and we were unable to recover it. 00:28:38.983 [2024-12-05 21:21:46.974881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.983 [2024-12-05 21:21:46.974898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.984 qpair failed and we were unable to recover it. 00:28:38.984 [2024-12-05 21:21:46.975072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.984 [2024-12-05 21:21:46.975088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.984 qpair failed and we were unable to recover it. 00:28:38.984 [2024-12-05 21:21:46.975238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.975251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.975393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.975408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 
00:28:38.988 [2024-12-05 21:21:46.975563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.975581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.975677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.975695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.975791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.975805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.975888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.975903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.976060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.976076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 
00:28:38.988 [2024-12-05 21:21:46.976152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.976165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.976231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.976244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.976417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.976435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.988 [2024-12-05 21:21:46.976519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.988 [2024-12-05 21:21:46.976540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.988 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.976671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.976683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 
00:28:38.989 [2024-12-05 21:21:46.976887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.976903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.977048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.977065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.977214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.977229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.977390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.977423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.977583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.977598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 
00:28:38.989 [2024-12-05 21:21:46.977842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.977859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.978011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.978025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.978106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.978118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.989 qpair failed and we were unable to recover it. 00:28:38.989 [2024-12-05 21:21:46.978194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.989 [2024-12-05 21:21:46.978206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.978297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.978311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 
00:28:38.990 [2024-12-05 21:21:46.978390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.978406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.978627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.978643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.978792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.978807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.978897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.978912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.979043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.979058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 
00:28:38.990 [2024-12-05 21:21:46.979216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.979232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.979305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.979319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.979402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.979416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.979560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.979573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 00:28:38.990 [2024-12-05 21:21:46.979646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.990 [2024-12-05 21:21:46.979659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.990 qpair failed and we were unable to recover it. 
00:28:38.991 [2024-12-05 21:21:46.979792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.991 [2024-12-05 21:21:46.979808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.991 qpair failed and we were unable to recover it. 00:28:38.991 [2024-12-05 21:21:46.979898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.991 [2024-12-05 21:21:46.979913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.991 qpair failed and we were unable to recover it. 00:28:38.991 [2024-12-05 21:21:46.980063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.991 [2024-12-05 21:21:46.980077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:38.991 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.980176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.980191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.980278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.980291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 
00:28:39.272 [2024-12-05 21:21:46.980465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.980483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.980640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.980657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.980819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.980840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.980942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.980958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.981143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.981160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 
00:28:39.272 [2024-12-05 21:21:46.981319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.981336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.981411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.981429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.981533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.272 [2024-12-05 21:21:46.981546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.272 qpair failed and we were unable to recover it. 00:28:39.272 [2024-12-05 21:21:46.981687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.981700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.981856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.981873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.982040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.982055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.982198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.982213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.982414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.982431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.982587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.982604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.982688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.982703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.982908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.982923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.983514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.983920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.983947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.984102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.984219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.984393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.984499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.984576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.984770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.984891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.984906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.985574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.985924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.985937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.986008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.986091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.986277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.986386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.986545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 00:28:39.273 [2024-12-05 21:21:46.986709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.273 qpair failed and we were unable to recover it. 
00:28:39.273 [2024-12-05 21:21:46.986817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.273 [2024-12-05 21:21:46.986832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.986911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.986929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.987017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.987032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.987236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.987253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.987410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.987424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 
00:28:39.274 [2024-12-05 21:21:46.987572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.987589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.987746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.987762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.987861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.987875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.988011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.988027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 00:28:39.274 [2024-12-05 21:21:46.988162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.274 [2024-12-05 21:21:46.988177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.274 qpair failed and we were unable to recover it. 
00:28:39.274 [2024-12-05 21:21:46.988353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:39.275 [2024-12-05 21:21:46.993142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.275 [2024-12-05 21:21:46.993188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e8000b90 with addr=10.0.0.2, port=4420
00:28:39.275 qpair failed and we were unable to recover it.
00:28:39.275 [2024-12-05 21:21:46.995140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.995167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.995338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.995360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.995472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.995490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.995596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.995612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.995766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.995787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 
00:28:39.275 [2024-12-05 21:21:46.995996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.996166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.996331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.996457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.996687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 
00:28:39.275 [2024-12-05 21:21:46.996806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.996961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.996977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.997181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.997200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.997433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.997452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.997664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.997677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 
00:28:39.275 [2024-12-05 21:21:46.997758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.997772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.997908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.997926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 
00:28:39.275 [2024-12-05 21:21:46.998347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 00:28:39.275 [2024-12-05 21:21:46.998908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.998926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.275 qpair failed and we were unable to recover it. 
00:28:39.275 [2024-12-05 21:21:46.999084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.275 [2024-12-05 21:21:46.999098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:46.999241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:46.999254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:46.999429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:46.999449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:46.999533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:46.999547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:46.999748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:46.999765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 
00:28:39.276 [2024-12-05 21:21:46.999854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:46.999871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.000025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.000276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.000394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.000623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 
00:28:39.276 [2024-12-05 21:21:47.000708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.000859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.000979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.000995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.001094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.001110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.001289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.001306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 
00:28:39.276 [2024-12-05 21:21:47.001458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.001476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.001634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.001651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.001799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.001814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.002051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.002067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.002291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.002309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 
00:28:39.276 [2024-12-05 21:21:47.002557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.002579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.002720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.002735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.003005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.003023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.003178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.003194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.003329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.003343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 
00:28:39.276 [2024-12-05 21:21:47.003536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.003551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.003702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.003715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.003917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.003935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.004140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.004156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 00:28:39.276 [2024-12-05 21:21:47.004337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.276 [2024-12-05 21:21:47.004353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.276 qpair failed and we were unable to recover it. 
00:28:39.276 [2024-12-05 21:21:47.004519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.004535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.004680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.004695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.004790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.004804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.004889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.004903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.005123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.005136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.005303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.005319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.005495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.005514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.005725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.005741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.276 [2024-12-05 21:21:47.005844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.276 [2024-12-05 21:21:47.005859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.276 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.006977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.006993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.007205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.007221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.007361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.007409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.007502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.007516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.007688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.007707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.007783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.007795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.007878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.007890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.008847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.008984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.009006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.009106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.009121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.009264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.009278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.009511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.009526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.009693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.009711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.009850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.009866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.277 [2024-12-05 21:21:47.010969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.277 [2024-12-05 21:21:47.010987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.277 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.011261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.011279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.011442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.011458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.011633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.011647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.011725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.011739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.011963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.011981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.012925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.012935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.013066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.013080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.013163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.013176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.013317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.013333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.013485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.013503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.013710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.013726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.013886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.013900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.014054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.014070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.014140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.014154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.014378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.014392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.014490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.014505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.014656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.014673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.014932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.014950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.015225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.015243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.015508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.015530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.015641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.015655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.015807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.015823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.015974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.015990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.016168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.016186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.016391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.016408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.016622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.016638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.016739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.016752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.016949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.016966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.017170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.017186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.017343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.017356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.017505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.017523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.278 [2024-12-05 21:21:47.017614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.278 [2024-12-05 21:21:47.017629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.278 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.017717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.017731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.017838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.017859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.017965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.017981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.018142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.018158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.018243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.018255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.018349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.018363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.018516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.018534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.018612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.018624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.018852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.018865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.019041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.019204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.019386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.019540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.019636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.279 [2024-12-05 21:21:47.019730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.279 qpair failed and we were unable to recover it.
00:28:39.279 [2024-12-05 21:21:47.019865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.019881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.019979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.019993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.020087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.020163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.020331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 
00:28:39.279 [2024-12-05 21:21:47.020421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.020592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.020791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.020959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.020977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.021122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 
00:28:39.279 [2024-12-05 21:21:47.021296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.021400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.021514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.021681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.021756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 
00:28:39.279 [2024-12-05 21:21:47.021844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.021856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.022075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.022093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.022178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.022192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.022272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.022289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.022361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.022380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 
00:28:39.279 [2024-12-05 21:21:47.022464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.022479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.022653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.279 [2024-12-05 21:21:47.022670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.279 qpair failed and we were unable to recover it. 00:28:39.279 [2024-12-05 21:21:47.022759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.022773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.022923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.022938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.023072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.023165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.023265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.023438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.023619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.023719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.023900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.023916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.024604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.024966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.024982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.025137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.025152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.025377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.025395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.025547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.025570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.025662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.025679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.025749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.025766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.025907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.025924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.026064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.026082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.026347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.026365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.026522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.026537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.026627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.026641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.026789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.026804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.026954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.026970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.027062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.027223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.027395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.027565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 
00:28:39.280 [2024-12-05 21:21:47.027663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.027827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.280 qpair failed and we were unable to recover it. 00:28:39.280 [2024-12-05 21:21:47.027913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.280 [2024-12-05 21:21:47.027925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.028015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.028027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.028250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.028267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 
00:28:39.281 [2024-12-05 21:21:47.028435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.028455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.028690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.028706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.028883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.028898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.029090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.029212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 
00:28:39.281 [2024-12-05 21:21:47.029378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.029472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.029635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.029746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.029852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.029867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 
00:28:39.281 [2024-12-05 21:21:47.030007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.030229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.030332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.030455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.030553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 
00:28:39.281 [2024-12-05 21:21:47.030651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.030751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.030923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.030935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.031028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.031041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.031105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.281 [2024-12-05 21:21:47.031120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.281 qpair failed and we were unable to recover it. 00:28:39.281 [2024-12-05 21:21:47.031176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:39.281 [2024-12-05 21:21:47.031202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:39.281 [2024-12-05 21:21:47.031210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 
00:28:39.281 [2024-12-05 21:21:47.031219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:39.281 [2024-12-05 21:21:47.031224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:39.281 [... connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error (tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420) / qpair failed and we were unable to recover it — sequence repeats from 21:21:47.031325 through 21:21:47.031822 ...] 
00:28:39.281 [... connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error (tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420) / qpair failed and we were unable to recover it — sequence repeats from 21:21:47.031911 through 21:21:47.032666 ...] 
00:28:39.282 [2024-12-05 21:21:47.032732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 
00:28:39.282 [2024-12-05 21:21:47.032857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 
00:28:39.282 [2024-12-05 21:21:47.032964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 
00:28:39.282 [2024-12-05 21:21:47.032965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 
00:28:39.282 [... interleaved with the reactor startup notices above, the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error (tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420) / qpair failed and we were unable to recover it — sequence repeats from 21:21:47.032819 through 21:21:47.033283 ...] 
00:28:39.282 [2024-12-05 21:21:47.033356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.033374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.033506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.033521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.033599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.033614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.033713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.033727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.033917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.033934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 
00:28:39.282 [2024-12-05 21:21:47.034026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.034194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.034291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.034378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.034623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 
00:28:39.282 [2024-12-05 21:21:47.034723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.034817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.034918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.034932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.035001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.035083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 
00:28:39.282 [2024-12-05 21:21:47.035188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.035289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.035541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.035701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.035864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 
00:28:39.282 [2024-12-05 21:21:47.035978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.035994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.036087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.036191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.036302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.036392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 
00:28:39.282 [2024-12-05 21:21:47.036561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.036783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.036875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.036889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.037030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.037047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 00:28:39.282 [2024-12-05 21:21:47.037301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.282 [2024-12-05 21:21:47.037315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.282 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.037400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.037415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.037577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.037595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.037749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.037765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.037907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.037923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.038068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.038082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.038294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.038311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.038537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.038557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.038721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.038735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.038886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.038906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.039068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.039093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.039247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.039266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.039406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.039423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.039513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.039528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.039638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.039654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.039791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.039806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.040034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.040060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.040308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.040322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.040482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.040499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.040588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.040608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.040703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.040718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.040897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.040913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.040996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.041172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.041273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.041465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.041566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.041719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.041822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.041920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.041936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.042006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.042022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 
00:28:39.283 [2024-12-05 21:21:47.042105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.042122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.042226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.042244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.042332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.042350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.042490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.283 [2024-12-05 21:21:47.042506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.283 qpair failed and we were unable to recover it. 00:28:39.283 [2024-12-05 21:21:47.042576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.042588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 
00:28:39.284 [2024-12-05 21:21:47.042726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.042740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.042819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.042833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.042932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.042948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.043083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.043335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 
00:28:39.284 [2024-12-05 21:21:47.043473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.043581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.043754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.043835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.043934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.043949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 
00:28:39.284 [2024-12-05 21:21:47.044163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.044268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.044444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.044547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.044650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 
00:28:39.284 [2024-12-05 21:21:47.044828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.044948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.044963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.045055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.045078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.045160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.045174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.045402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.045422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 
00:28:39.284 [2024-12-05 21:21:47.045651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.045669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.045756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.045769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.045848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.045862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.045993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.046006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 00:28:39.284 [2024-12-05 21:21:47.046084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.284 [2024-12-05 21:21:47.046097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.284 qpair failed and we were unable to recover it. 
00:28:39.287 [2024-12-05 21:21:47.063358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.287 [2024-12-05 21:21:47.063377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.287 qpair failed and we were unable to recover it. 00:28:39.287 [2024-12-05 21:21:47.063535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.287 [2024-12-05 21:21:47.063548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.287 qpair failed and we were unable to recover it. 00:28:39.287 [2024-12-05 21:21:47.063749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.287 [2024-12-05 21:21:47.063767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.287 qpair failed and we were unable to recover it. 00:28:39.287 [2024-12-05 21:21:47.063861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.287 [2024-12-05 21:21:47.063874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.064146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.064162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.064401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.064423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.064631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.064647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.064796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.064811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.064947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.064963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.065062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.065075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.065297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.065313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.065467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.065482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.065726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.065743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.065948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.065967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.066136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.066151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.066299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.066314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.066457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.066475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.066625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.066639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.066782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.066796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.066984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.066999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.067085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.067097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.067399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.067420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.067578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.067594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.067798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.067814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.068024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.068040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.068257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.068276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.068420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.068437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.068673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.068687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.068886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.068903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.069076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.069092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.069172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.069195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.069345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.069361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.069610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.069629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.069764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.069779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.069935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.069951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.070096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.070108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.070339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.070356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.070577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.070595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.070769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.070786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.070995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.071010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 
00:28:39.288 [2024-12-05 21:21:47.071149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.288 [2024-12-05 21:21:47.071163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.288 qpair failed and we were unable to recover it. 00:28:39.288 [2024-12-05 21:21:47.071393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.071429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.071642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.071656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.071811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.071826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.072072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.072090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.072306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.072323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.072540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.072557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.072693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.072707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.072872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.072888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.073103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.073117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.073260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.073275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.073498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.073519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.073729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.073745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.073960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.073976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.074181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.074197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.074427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.074448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.074711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.074728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.074854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.074871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.075107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.075124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.075371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.075386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.075619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.075638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.075890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.075907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.075994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.076008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.076220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.076235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.076390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.076406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.076619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.076639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.076813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.076827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.077048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.077064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.077315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.077334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.077563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.077581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.077727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.077742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.077886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.077900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.078055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.078070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.078275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.078291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 00:28:39.289 [2024-12-05 21:21:47.078509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.078525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it. 
00:28:39.289 [2024-12-05 21:21:47.078675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.289 [2024-12-05 21:21:47.078693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.289 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) and qpair recovery errors for tqpair=0x7fa9e0000b90, addr=10.0.0.2, port=4420 repeat continuously from 21:21:47.078675 through 21:21:47.102473; duplicate log entries elided ...]
00:28:39.293 [2024-12-05 21:21:47.102710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.102726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.102956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.102974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.103131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.103146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.103295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.103308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.103547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.103564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 
00:28:39.293 [2024-12-05 21:21:47.103703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.103720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.103821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.103835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.103992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.104007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.104176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.104192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.104296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.104311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 
00:28:39.293 [2024-12-05 21:21:47.104570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.104590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.104828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.104844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.104982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.104995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.105219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.105235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.105394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.105411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 
00:28:39.293 [2024-12-05 21:21:47.105638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.105653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.105876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.105892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.106094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.106110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.106248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.106266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.106442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.106459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 
00:28:39.293 [2024-12-05 21:21:47.106705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.106721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.106972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.106989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.107197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.107212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.107415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.107435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.107596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.107611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 
00:28:39.293 [2024-12-05 21:21:47.107862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.107880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.108126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.108143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.108300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.108315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.108545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.108559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.108729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.108745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 
00:28:39.293 [2024-12-05 21:21:47.108900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.108916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.109080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.109095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.109295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.109310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.293 [2024-12-05 21:21:47.109528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.293 [2024-12-05 21:21:47.109547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.293 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.109740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.109756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.109899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.109915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.110066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.110080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.110390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.110411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.110681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.110698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.110873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.110890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.111098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.111114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.111266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.111279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.111473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.111490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.111720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.111740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.111824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.111838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.111939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.111954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.112202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.112222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.112325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.112338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.112595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.112615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.112793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.112809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.113045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.113062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.113161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.113175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.113323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.113339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.113580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.113599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.113737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.113753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.113996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.114014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.114173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.114189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.114331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.114345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.114548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.114564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.114792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.114822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.115058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.115075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.115306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.115321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.115541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.115559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.115758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.115775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.116002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.116196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.116353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.116552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.116676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.116764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 
00:28:39.294 [2024-12-05 21:21:47.116869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.116881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.294 [2024-12-05 21:21:47.117081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.294 [2024-12-05 21:21:47.117098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.294 qpair failed and we were unable to recover it. 00:28:39.295 [2024-12-05 21:21:47.117198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.295 [2024-12-05 21:21:47.117214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.295 qpair failed and we were unable to recover it. 00:28:39.295 [2024-12-05 21:21:47.117362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.295 [2024-12-05 21:21:47.117382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.295 qpair failed and we were unable to recover it. 00:28:39.295 [2024-12-05 21:21:47.117516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.295 [2024-12-05 21:21:47.117532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.295 qpair failed and we were unable to recover it. 
00:28:39.295 [2024-12-05 21:21:47.117627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.295 [2024-12-05 21:21:47.117646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.295 qpair failed and we were unable to recover it.
00:28:39.297 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:39.297 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:28:39.297 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:39.297 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:39.297 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:39.298 [2024-12-05 21:21:47.138956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.138972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.139225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.139243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.139386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.139404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.139513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.139529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.139764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.139781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 
00:28:39.298 [2024-12-05 21:21:47.139960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.139976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.140071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.140086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.140236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.140252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.140479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.140495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.140694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.140716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 
00:28:39.298 [2024-12-05 21:21:47.140932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.140949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.141109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.141124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.141264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.141281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.141376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.141392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.141522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.141540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 
00:28:39.298 [2024-12-05 21:21:47.141679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.141696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.141955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.141972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.142126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.142143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.142296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.142314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.142469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.142486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 
00:28:39.298 [2024-12-05 21:21:47.142636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.142652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.142910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.142927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.143063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.143080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.143317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.143334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.143485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.143500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 
00:28:39.298 [2024-12-05 21:21:47.143732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.143752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.143933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.143949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.144123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.144213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.144385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 
00:28:39.298 [2024-12-05 21:21:47.144486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.144659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.144832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.298 [2024-12-05 21:21:47.144983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.298 [2024-12-05 21:21:47.144997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.298 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.145202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.145220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.145403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.145420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.145583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.145599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.145698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.145853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.145871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.146120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.146745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.146962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.146976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.147043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.147131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.147227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.147312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.147536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.147718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.147824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.147930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.147944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.148510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.148908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.148922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.148999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.149161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.149384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.149474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.149620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 
00:28:39.299 [2024-12-05 21:21:47.149701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.299 [2024-12-05 21:21:47.149780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.299 [2024-12-05 21:21:47.149794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.299 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.149868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.149882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 
00:28:39.300 [2024-12-05 21:21:47.150289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 
00:28:39.300 [2024-12-05 21:21:47.150867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.150968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.150979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.151111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.151124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.151188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.151200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 00:28:39.300 [2024-12-05 21:21:47.151472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.300 [2024-12-05 21:21:47.151492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.300 qpair failed and we were unable to recover it. 
00:28:39.303 [2024-12-05 21:21:47.171154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.171171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.171429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.171473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.171625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.171658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.171789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.171822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.172088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.172120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 
00:28:39.303 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.303 [2024-12-05 21:21:47.172389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.172425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.172627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:39.303 [2024-12-05 21:21:47.172661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.172802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.172828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.172936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.172953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 
00:28:39.303 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.303 [2024-12-05 21:21:47.173116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.173133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.303 [2024-12-05 21:21:47.173287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.173306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.173475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.173493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.173664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.173687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 
00:28:39.303 [2024-12-05 21:21:47.173849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.173865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.174093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.174108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.174273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.174290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.174443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.174463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 00:28:39.303 [2024-12-05 21:21:47.174569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.174585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.303 qpair failed and we were unable to recover it. 
00:28:39.303 [2024-12-05 21:21:47.174820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.303 [2024-12-05 21:21:47.174836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.174936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.174951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.175133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.175312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.175464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.175584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.175694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.175797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.175901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.175916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.176170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.176186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.176417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.176436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.176590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.176607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.176766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.176782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.176998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.177011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.177217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.177234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.177394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.177413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.177644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.177660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.177812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.177827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.178085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.178105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.178262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.178276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.178437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.178451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.178598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.178615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.178775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.178790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.178928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.178943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.179195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.179211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.179445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.179466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.179675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.179690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.179838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.179853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.180065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.180083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.180321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.180337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.180598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.180615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.180782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.180799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.180965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.181191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.181206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.181397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.181417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 
00:28:39.304 [2024-12-05 21:21:47.181637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.181666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.181751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.181765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.181971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.181986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.304 qpair failed and we were unable to recover it. 00:28:39.304 [2024-12-05 21:21:47.182164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.304 [2024-12-05 21:21:47.182182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.182340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.182360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.182511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.182528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.182754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.182771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.183008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.183026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.183292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.183308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.183532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.183552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.183760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.183776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.183926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.183942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.184199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.184219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.184511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.184527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.184683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.184700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.184787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.184802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.185018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.185035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.185193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.185208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.185466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.185488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.185733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.185749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.185837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.185848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.186023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.186037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.186248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.186266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.186490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.186508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.186732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.186748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.186964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.186983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.187138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.187155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.187288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.187300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.187512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.187531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.187765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.187782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.187942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.187958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.188187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.188204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.188364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.188386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.188563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.188579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.188808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.188824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 
00:28:39.305 [2024-12-05 21:21:47.189051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.189071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.189277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.189293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.305 [2024-12-05 21:21:47.189387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.305 [2024-12-05 21:21:47.189403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.305 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.189552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.189568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.189772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.189790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.189946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.189962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.190173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.190186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.190338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.190354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.190455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.190469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.190695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.190711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.190873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.190888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.191064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.191082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.191231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.191247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.191402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.191417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.191619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.191634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.191785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.191801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.192010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.192025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.192265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.192282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.192557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.192783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.192800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.192948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.192961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.193171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.193189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.193408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.193427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.193643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.193658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.193834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.193849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.194076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.194096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.194244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.194258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.194406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.194420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.194518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.194535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.194672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.194689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.194866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.194881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.195059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.195079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.195306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.195325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.195519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.195536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.195812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.195827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.196033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.196052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.196274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.196289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.196494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.196524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.196594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.196607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 00:28:39.306 [2024-12-05 21:21:47.196842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.196861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.306 qpair failed and we were unable to recover it. 
00:28:39.306 [2024-12-05 21:21:47.197115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.306 [2024-12-05 21:21:47.197134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.197363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.197386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.197528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.197544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.197650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.197662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.197821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.197834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.197982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.197997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.198154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.198171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.198253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.198266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.198363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.198381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.198531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.198547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.198703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.198718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.198944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.198963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.199143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.199156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.199256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.199267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.199468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.199487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.199720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.199737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.199835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.199850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.199999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.200014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.200157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.200174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.200312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.200327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.200553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.200568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.200746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.200773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.201011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.201028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.201303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.201320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.201548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.201570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.201672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.201686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.201913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.201929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.202094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.202107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.202192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.202205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.202430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.202449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.202599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.202614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.202717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.202736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.202889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.202904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.203049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.203065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.203273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.203289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 
00:28:39.307 [2024-12-05 21:21:47.203375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.203388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.203590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.203605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.203774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.203792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.307 [2024-12-05 21:21:47.204048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.307 [2024-12-05 21:21:47.204065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.307 qpair failed and we were unable to recover it. 00:28:39.308 [2024-12-05 21:21:47.204333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.308 [2024-12-05 21:21:47.204350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.308 qpair failed and we were unable to recover it. 
00:28:39.308 [2024-12-05 21:21:47.204508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.204525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.204696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.204712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 Malloc0
00:28:39.308 [2024-12-05 21:21:47.204949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.204964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.205054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.205067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.205323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.205340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.205510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.205527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.308 [2024-12-05 21:21:47.205706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.205723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.205968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.205987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:39.308 [2024-12-05 21:21:47.206169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.206186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.308 [2024-12-05 21:21:47.206357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.206378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.206609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.206628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:39.308 [2024-12-05 21:21:47.206884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.206902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.207059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.207075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.207225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.207241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.207383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.207401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.207552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.207568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.207885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.207920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.208094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.208127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.208269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.208301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.208492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.208516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.208676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.208694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.208929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.208946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.209199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.209214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.209321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.209338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.209490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.209508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.209739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.209756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.209967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.209983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.210093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.210109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.210284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.210301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.210528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.210547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.210720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.210738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.210935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.210953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.211183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.211199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.211427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.211448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.308 [2024-12-05 21:21:47.211685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.308 [2024-12-05 21:21:47.211706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.308 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.211920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.211938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.212081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.212099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.212329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.212347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.212453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:39.309 [2024-12-05 21:21:47.212507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.212521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.212739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.212756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.212988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.213007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.213158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.213175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.213381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.213400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.213638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.213657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.213917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.213933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.214123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.214139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.214393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.214412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.214622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.214641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.214805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.214821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.214962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.214980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.215242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.215260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.215418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.215434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.215609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.215627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.215801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.215817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.216088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.216104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.216190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.216204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.216426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.216447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.216524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.216538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.216645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.216660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.216791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.216805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.217007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.217024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.217179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.217197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.217349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.217364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.217540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.217557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.217703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.217719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.217895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.217912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.218052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.218068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.218271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.218287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.309 [2024-12-05 21:21:47.218423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.309 [2024-12-05 21:21:47.218437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.309 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.218575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.218595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.218771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.218789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.218951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.218966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.219105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.219121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.219353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.219381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.219611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.219629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.219860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.219874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.220072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.220090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.220375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.220396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.220573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.220591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.220845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.220866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.221142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.221159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.221385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.221442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6afbe0 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:39.310 [2024-12-05 21:21:47.221759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.221794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.310 [2024-12-05 21:21:47.221939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.221972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9dc000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:39.310 [2024-12-05 21:21:47.222246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.222270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.222377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.222393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.222540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.222554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.222705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.222722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.222995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.223013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.223175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.223192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.223350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.223371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.223515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.223534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.223742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.223759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.223919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.223933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.224081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.224096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.224299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.224317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.224510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.224528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.224703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.224720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.224954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.224974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.225091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.225108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.225326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.225341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.225448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.225464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.225696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.225716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.225874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.225889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.310 [2024-12-05 21:21:47.225984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.310 [2024-12-05 21:21:47.226001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.310 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.226212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.311 [2024-12-05 21:21:47.226229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.311 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.226385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.311 [2024-12-05 21:21:47.226405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.311 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.226569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.311 [2024-12-05 21:21:47.226593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.311 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.226758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.311 [2024-12-05 21:21:47.226775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.311 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.227001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.311 [2024-12-05 21:21:47.227019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.311 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.227287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.311 [2024-12-05 21:21:47.227307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420
00:28:39.311 qpair failed and we were unable to recover it.
00:28:39.311 [2024-12-05 21:21:47.227526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.227543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.227707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.227726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.227982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.228000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.228184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.228201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.228457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.228480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-12-05 21:21:47.228573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.228588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.228738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.228752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.228885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.228898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.229075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.229092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.229165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.229180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:39.311 [2024-12-05 21:21:47.229363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.229384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:39.311 [2024-12-05 21:21:47.229650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.229669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.229760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.229774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.311 [2024-12-05 21:21:47.230001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.230022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.311 [2024-12-05 21:21:47.230254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.230270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.230519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.230539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.230693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.230711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.230945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.230961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.231113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.231128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-12-05 21:21:47.231360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.231384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.231579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.231594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.231822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.231842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.232008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.232027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.232197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.232213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-12-05 21:21:47.232366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.232387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.232593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.232609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.232788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.232806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.233033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.233050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.233280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.233296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-12-05 21:21:47.233549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.311 [2024-12-05 21:21:47.233570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.311 qpair failed and we were unable to recover it. 00:28:39.311 [2024-12-05 21:21:47.233722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.233738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.233944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.233961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.234208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.234227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.234394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.234413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.234486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.234498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.234648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.234662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.234805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.234822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.234974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.234991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.235086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.235100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.235278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.235294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.235495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.235512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.235596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.235609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.235818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.235837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.236095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.236110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.236290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.236307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.236537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.236556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.236739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.236756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.236986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.237003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.237169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.237187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.312 [2024-12-05 21:21:47.237392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.237409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.237566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.237580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.312 [2024-12-05 21:21:47.237737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.237755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.237912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.237929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.238137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.238154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.312 [2024-12-05 21:21:47.238246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.238262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.238414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.238433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.238641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.238659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.238890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.238906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.239130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.239148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.239327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.239349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.239500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.239517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.239725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.239742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.239901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.239917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.240099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.240115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.240352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.240371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 00:28:39.312 [2024-12-05 21:21:47.240577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.312 [2024-12-05 21:21:47.240595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9e0000b90 with addr=10.0.0.2, port=4420 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-12-05 21:21:47.240682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.313 [2024-12-05 21:21:47.243139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.313 [2024-12-05 21:21:47.243232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.313 [2024-12-05 21:21:47.243257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.313 [2024-12-05 21:21:47.243270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.313 [2024-12-05 21:21:47.243281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.313 [2024-12-05 21:21:47.243309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.313 qpair failed and we were unable to recover it. 
00:28:39.313 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.313 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:39.313 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.313 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.313 [2024-12-05 21:21:47.253031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.313 [2024-12-05 21:21:47.253098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.313 [2024-12-05 21:21:47.253121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.313 [2024-12-05 21:21:47.253137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.313 [2024-12-05 21:21:47.253147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.313 [2024-12-05 21:21:47.253171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.313 qpair failed and we were unable to recover it. 
00:28:39.313 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.313 21:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1470180 00:28:39.313 [2024-12-05 21:21:47.263042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.313 [2024-12-05 21:21:47.263107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.313 [2024-12-05 21:21:47.263130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.313 [2024-12-05 21:21:47.263141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.313 [2024-12-05 21:21:47.263150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.313 [2024-12-05 21:21:47.263174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.313 qpair failed and we were unable to recover it. 
00:28:39.313 [2024-12-05 21:21:47.273066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.313 [2024-12-05 21:21:47.273142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.313 [2024-12-05 21:21:47.273164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.313 [2024-12-05 21:21:47.273175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.313 [2024-12-05 21:21:47.273184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.313 [2024-12-05 21:21:47.273207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.313 qpair failed and we were unable to recover it. 
00:28:39.313 [2024-12-05 21:21:47.283013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.283081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.283104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.283115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.283125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.283148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.292998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.293094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.293117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.293132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.293141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.293164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.303067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.303132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.303155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.303165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.303174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.303196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.313131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.313204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.313230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.313244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.313255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.313281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.323213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.323319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.323341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.323351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.323360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.323388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.333169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.333225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.333246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.333255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.333264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.333290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.343188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.343247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.343270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.343281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.343290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.343314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.313 [2024-12-05 21:21:47.353152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.313 [2024-12-05 21:21:47.353214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.313 [2024-12-05 21:21:47.353236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.313 [2024-12-05 21:21:47.353246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.313 [2024-12-05 21:21:47.353255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.313 [2024-12-05 21:21:47.353278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.313 qpair failed and we were unable to recover it.
00:28:39.573 [2024-12-05 21:21:47.363256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.573 [2024-12-05 21:21:47.363321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.363343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.363354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.363363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.363393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.373253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.373319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.373342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.373353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.373362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.373393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.383281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.383347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.383378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.383390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.383400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.383424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.393271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.393331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.393353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.393364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.393379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.393403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.403343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.403409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.403431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.403442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.403452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.403475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.413388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.413451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.413474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.413485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.413495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.413518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.423400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.423464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.423492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.423504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.423512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.423535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.433462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.433527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.433549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.433561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.433570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.433593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.443512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.443578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.443600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.443610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.443619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.443641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.453541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.453615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.453636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.453647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.453656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.453678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.463509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.463564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.463585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.463595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.463603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.463627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.473526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.473588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.473609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.473619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.473627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.473648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.483550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.483626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.574 [2024-12-05 21:21:47.483648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.574 [2024-12-05 21:21:47.483660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.574 [2024-12-05 21:21:47.483669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.574 [2024-12-05 21:21:47.483694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.574 qpair failed and we were unable to recover it.
00:28:39.574 [2024-12-05 21:21:47.493560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.574 [2024-12-05 21:21:47.493625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.493647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.493658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.493667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.493689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.503632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.503691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.503715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.503726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.503735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.503758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.513624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.513688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.513710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.513722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.513731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.513755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.523755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.523815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.523837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.523848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.523858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.523881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.533673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.533730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.533751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.533761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.533770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.533793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.543713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.543771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.543794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.543805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.543814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.543838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.553762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.553823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.553848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.553859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.553868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.553891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.563808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.563888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.563910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.563920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.563928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.563949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.573861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.573924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.573946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.573957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.573967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.573990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.583858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-12-05 21:21:47.583921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-12-05 21:21:47.583943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-12-05 21:21:47.583954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-12-05 21:21:47.583964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:39.575 [2024-12-05 21:21:47.583987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-12-05 21:21:47.593860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.575 [2024-12-05 21:21:47.593923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.575 [2024-12-05 21:21:47.593944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.575 [2024-12-05 21:21:47.593954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.575 [2024-12-05 21:21:47.593968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.575 [2024-12-05 21:21:47.593991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.575 qpair failed and we were unable to recover it. 
00:28:39.575 [2024-12-05 21:21:47.603930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.575 [2024-12-05 21:21:47.603990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.575 [2024-12-05 21:21:47.604010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.575 [2024-12-05 21:21:47.604020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.575 [2024-12-05 21:21:47.604027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.575 [2024-12-05 21:21:47.604047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.575 qpair failed and we were unable to recover it. 
00:28:39.575 [2024-12-05 21:21:47.613955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.575 [2024-12-05 21:21:47.614014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.575 [2024-12-05 21:21:47.614037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.575 [2024-12-05 21:21:47.614049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.575 [2024-12-05 21:21:47.614058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.575 [2024-12-05 21:21:47.614081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.575 qpair failed and we were unable to recover it. 
00:28:39.575 [2024-12-05 21:21:47.623967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.576 [2024-12-05 21:21:47.624042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.576 [2024-12-05 21:21:47.624065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.576 [2024-12-05 21:21:47.624076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.576 [2024-12-05 21:21:47.624085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.576 [2024-12-05 21:21:47.624107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.576 qpair failed and we were unable to recover it. 
00:28:39.576 [2024-12-05 21:21:47.634003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.576 [2024-12-05 21:21:47.634067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.576 [2024-12-05 21:21:47.634090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.576 [2024-12-05 21:21:47.634102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.576 [2024-12-05 21:21:47.634111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.576 [2024-12-05 21:21:47.634135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.576 qpair failed and we were unable to recover it. 
00:28:39.576 [2024-12-05 21:21:47.644045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.576 [2024-12-05 21:21:47.644110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.576 [2024-12-05 21:21:47.644132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.576 [2024-12-05 21:21:47.644144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.576 [2024-12-05 21:21:47.644153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.576 [2024-12-05 21:21:47.644177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.576 qpair failed and we were unable to recover it. 
00:28:39.576 [2024-12-05 21:21:47.654033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.576 [2024-12-05 21:21:47.654095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.576 [2024-12-05 21:21:47.654118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.576 [2024-12-05 21:21:47.654129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.576 [2024-12-05 21:21:47.654139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.576 [2024-12-05 21:21:47.654162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.576 qpair failed and we were unable to recover it. 
00:28:39.576 [2024-12-05 21:21:47.664128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.576 [2024-12-05 21:21:47.664190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.576 [2024-12-05 21:21:47.664213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.576 [2024-12-05 21:21:47.664224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.576 [2024-12-05 21:21:47.664234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.576 [2024-12-05 21:21:47.664257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.576 qpair failed and we were unable to recover it. 
00:28:39.576 [2024-12-05 21:21:47.674072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.576 [2024-12-05 21:21:47.674143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.576 [2024-12-05 21:21:47.674165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.576 [2024-12-05 21:21:47.674177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.576 [2024-12-05 21:21:47.674186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.576 [2024-12-05 21:21:47.674209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.576 qpair failed and we were unable to recover it. 
00:28:39.836 [2024-12-05 21:21:47.684094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.836 [2024-12-05 21:21:47.684149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.836 [2024-12-05 21:21:47.684176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.836 [2024-12-05 21:21:47.684187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.836 [2024-12-05 21:21:47.684197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.836 [2024-12-05 21:21:47.684219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.836 qpair failed and we were unable to recover it. 
00:28:39.836 [2024-12-05 21:21:47.694212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.836 [2024-12-05 21:21:47.694272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.836 [2024-12-05 21:21:47.694294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.836 [2024-12-05 21:21:47.694305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.836 [2024-12-05 21:21:47.694314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.836 [2024-12-05 21:21:47.694336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.836 qpair failed and we were unable to recover it. 
00:28:39.836 [2024-12-05 21:21:47.704196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.836 [2024-12-05 21:21:47.704257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.836 [2024-12-05 21:21:47.704280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.836 [2024-12-05 21:21:47.704292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.836 [2024-12-05 21:21:47.704300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.836 [2024-12-05 21:21:47.704323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.836 qpair failed and we were unable to recover it. 
00:28:39.836 [2024-12-05 21:21:47.714253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.836 [2024-12-05 21:21:47.714361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.836 [2024-12-05 21:21:47.714390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.836 [2024-12-05 21:21:47.714402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.836 [2024-12-05 21:21:47.714411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.836 [2024-12-05 21:21:47.714434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.836 qpair failed and we were unable to recover it. 
00:28:39.836 [2024-12-05 21:21:47.724259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.836 [2024-12-05 21:21:47.724320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.836 [2024-12-05 21:21:47.724342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.836 [2024-12-05 21:21:47.724357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.836 [2024-12-05 21:21:47.724365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.836 [2024-12-05 21:21:47.724396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.836 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.734288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.734383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.734405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.734416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.734425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.734448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.744323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.744388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.744409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.744418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.744426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.744447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.754380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.754454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.754471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.754478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.754484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.754501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.764395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.764450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.764464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.764471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.764477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.764492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.774412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.774467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.774481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.774487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.774493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.774508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.784439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.784495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.784509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.784516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.784521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.784536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.794468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.794529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.794542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.794549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.794555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.794570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.804559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.804613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.804627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.804633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.804639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.804654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.814492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.814546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.814559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.814566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.814571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.814586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.824556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.824609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.824622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.824628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.824634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.824648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.834613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.834696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.834709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.834716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.834722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.834735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.844618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.844674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.844688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.844695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.844701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.844715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.854610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.837 [2024-12-05 21:21:47.854677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.837 [2024-12-05 21:21:47.854690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.837 [2024-12-05 21:21:47.854700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.837 [2024-12-05 21:21:47.854706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.837 [2024-12-05 21:21:47.854720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.837 qpair failed and we were unable to recover it. 
00:28:39.837 [2024-12-05 21:21:47.864673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.864727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.864740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.864747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.864753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.864768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.874691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.874745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.874758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.874765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.874771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.874785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.884720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.884778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.884792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.884799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.884805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.884819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.894769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.894848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.894861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.894868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.894873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.894891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.904782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.904867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.904880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.904887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.904893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.904907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.914737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.914793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.914807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.914814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.914820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.914834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.924843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.924896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.924909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.924916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.924922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.924936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:39.838 [2024-12-05 21:21:47.934850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.838 [2024-12-05 21:21:47.934903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.838 [2024-12-05 21:21:47.934916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.838 [2024-12-05 21:21:47.934922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.838 [2024-12-05 21:21:47.934929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:39.838 [2024-12-05 21:21:47.934942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.838 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:47.944878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:47.944930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:47.944943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:47.944949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:47.944955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:47.944969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:47.954946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:47.954999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:47.955013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:47.955019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:47.955025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:47.955039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:47.964940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:47.965022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:47.965035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:47.965042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:47.965047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:47.965062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:47.974961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:47.975017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:47.975030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:47.975036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:47.975042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:47.975057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:47.984990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:47.985039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:47.985054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:47.985061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:47.985067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:47.985081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:47.995061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:47.995137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:47.995150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:47.995157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:47.995162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:47.995176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:48.005092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:48.005154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:48.005188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:48.005195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:48.005201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:48.005224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:48.015100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:48.015159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:48.015173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:48.015180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:48.015186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:48.015201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:48.025109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:48.025166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:48.025180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:48.025187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:48.025195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:48.025210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:48.035143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:48.035200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.098 [2024-12-05 21:21:48.035213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.098 [2024-12-05 21:21:48.035219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.098 [2024-12-05 21:21:48.035225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.098 [2024-12-05 21:21:48.035240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.098 qpair failed and we were unable to recover it. 
00:28:40.098 [2024-12-05 21:21:48.045299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.098 [2024-12-05 21:21:48.045361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.045378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.045385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.045391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.045406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.055248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.055302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.055316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.055322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.055328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.055342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.065262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.065344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.065358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.065364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.065375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.065389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.075333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.075405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.075418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.075425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.075431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.075445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.085303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.085357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.085374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.085381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.085387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.085402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.095340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.095398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.095411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.095417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.095423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.095438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.105382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.105441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.105454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.105461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.105467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.105481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.115363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.115420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.115437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.115444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.115450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.115464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.125415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.125474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.125488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.125495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.125500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.125515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.135388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.135450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.135463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.135470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.135476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.135490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.145475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.145527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.145540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.145546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.145553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.145567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.155473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.155527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.155539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.155546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.155554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.155569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.165497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.165551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.165564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.165571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.165577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.165591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.175519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.099 [2024-12-05 21:21:48.175571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.099 [2024-12-05 21:21:48.175584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.099 [2024-12-05 21:21:48.175590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.099 [2024-12-05 21:21:48.175596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.099 [2024-12-05 21:21:48.175610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.099 qpair failed and we were unable to recover it. 
00:28:40.099 [2024-12-05 21:21:48.185610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-12-05 21:21:48.185660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-12-05 21:21:48.185673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.100 [2024-12-05 21:21:48.185679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.100 [2024-12-05 21:21:48.185685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.100 [2024-12-05 21:21:48.185700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.100 qpair failed and we were unable to recover it. 
00:28:40.100 [2024-12-05 21:21:48.195584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-12-05 21:21:48.195640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-12-05 21:21:48.195652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.100 [2024-12-05 21:21:48.195659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.100 [2024-12-05 21:21:48.195665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.100 [2024-12-05 21:21:48.195679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.100 qpair failed and we were unable to recover it. 
00:28:40.360 [2024-12-05 21:21:48.205604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.205654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.205668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.205674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.205680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.205694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.215640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.215696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.215709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.215715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.215721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.215736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.225632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.225697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.225710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.225717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.225722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.225737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.235685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.235782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.235794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.235801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.235806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.235820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.245719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.245774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.245789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.245796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.245802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.245815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.255745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.255798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.255811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.255818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.255824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.255838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.265809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.265869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.265881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.265888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.265894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.265908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.275806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.275861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.275874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.275880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.275886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.275901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.285833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.285885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.285898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.285909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.285915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.285929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.295844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.295896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.295909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.295915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.295922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.295936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.305878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.305927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.305940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.305947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.305953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.305966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.315917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.315971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.315984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.315991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.360 [2024-12-05 21:21:48.315996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.360 [2024-12-05 21:21:48.316011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.360 qpair failed and we were unable to recover it.
00:28:40.360 [2024-12-05 21:21:48.325937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.360 [2024-12-05 21:21:48.325995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.360 [2024-12-05 21:21:48.326008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.360 [2024-12-05 21:21:48.326014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.326020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.326034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.336057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.336132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.336145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.336152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.336158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.336172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.346002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.346085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.346098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.346105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.346111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.346125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.356095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.356179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.356192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.356198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.356204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.356218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.365994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.366049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.366062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.366068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.366075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.366090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.376137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.376238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.376251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.376257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.376263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.376277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.386163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.386268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.386281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.386287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.386293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.386307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.396162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.396219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.396232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.396239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.396244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.396259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.406179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.406230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.406244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.406250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.406256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.406270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.416223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.416280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.416293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.416303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.416308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.416323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.426223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.426277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.426290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.426296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.426302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.426317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.436261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.436317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.436330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.436337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.436343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.436357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.446297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.446350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.446363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.446374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.446379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.446395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.361 [2024-12-05 21:21:48.456330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.361 [2024-12-05 21:21:48.456388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.361 [2024-12-05 21:21:48.456401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.361 [2024-12-05 21:21:48.456408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.361 [2024-12-05 21:21:48.456414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.361 [2024-12-05 21:21:48.456432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.361 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.466360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.466414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.466427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.466434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.466440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.466454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.476400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.476457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.476470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.476477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.476483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.476498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.486406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.486454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.486466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.486472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.486478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.486493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.496480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.496542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.496556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.496563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.496569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.496583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.506464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.506520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.506533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.506539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.506545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.506559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.516538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.516595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.516608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.516615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.516621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.516635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.526544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.526612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.526625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.526631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.526637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.526651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.536575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.620 [2024-12-05 21:21:48.536643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.620 [2024-12-05 21:21:48.536656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.620 [2024-12-05 21:21:48.536663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.620 [2024-12-05 21:21:48.536669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:40.620 [2024-12-05 21:21:48.536683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:40.620 qpair failed and we were unable to recover it.
00:28:40.620 [2024-12-05 21:21:48.546634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.620 [2024-12-05 21:21:48.546682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.620 [2024-12-05 21:21:48.546697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.620 [2024-12-05 21:21:48.546704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.620 [2024-12-05 21:21:48.546710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.620 [2024-12-05 21:21:48.546724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.620 qpair failed and we were unable to recover it. 
00:28:40.620 [2024-12-05 21:21:48.556625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.620 [2024-12-05 21:21:48.556679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.620 [2024-12-05 21:21:48.556692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.620 [2024-12-05 21:21:48.556698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.620 [2024-12-05 21:21:48.556704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.620 [2024-12-05 21:21:48.556718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.620 qpair failed and we were unable to recover it. 
00:28:40.620 [2024-12-05 21:21:48.566662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.620 [2024-12-05 21:21:48.566762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.620 [2024-12-05 21:21:48.566775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.620 [2024-12-05 21:21:48.566782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.620 [2024-12-05 21:21:48.566787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.620 [2024-12-05 21:21:48.566802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.620 qpair failed and we were unable to recover it. 
00:28:40.620 [2024-12-05 21:21:48.576686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.620 [2024-12-05 21:21:48.576738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.620 [2024-12-05 21:21:48.576751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.620 [2024-12-05 21:21:48.576758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.620 [2024-12-05 21:21:48.576764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.620 [2024-12-05 21:21:48.576778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.620 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.586706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.586775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.586788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.586795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.586804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.586817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.596712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.596814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.596826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.596833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.596838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.596852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.606764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.606817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.606830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.606836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.606842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.606856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.616794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.616847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.616860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.616866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.616872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.616886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.626817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.626899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.626912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.626918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.626924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.626938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.636836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.636888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.636901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.636907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.636914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.636928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.646890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.646946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.646959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.646966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.646972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.646986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.656900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.656955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.656968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.656974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.656980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.656994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.666932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.666984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.666997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.667003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.667009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.667023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.677001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.677079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.677095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.677101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.677107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.677121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.686979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.687040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.687053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.687060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.687065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.687080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.697020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.697075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.697088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.697094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.697100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.697114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.707013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.707067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.707079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.707086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.707092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.707105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.621 [2024-12-05 21:21:48.717083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.621 [2024-12-05 21:21:48.717138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.621 [2024-12-05 21:21:48.717150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.621 [2024-12-05 21:21:48.717157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.621 [2024-12-05 21:21:48.717166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.621 [2024-12-05 21:21:48.717180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.621 qpair failed and we were unable to recover it. 
00:28:40.880 [2024-12-05 21:21:48.727120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.880 [2024-12-05 21:21:48.727177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.880 [2024-12-05 21:21:48.727191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.880 [2024-12-05 21:21:48.727198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.880 [2024-12-05 21:21:48.727204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.880 [2024-12-05 21:21:48.727218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.880 qpair failed and we were unable to recover it. 
00:28:40.880 [2024-12-05 21:21:48.737167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.880 [2024-12-05 21:21:48.737244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.880 [2024-12-05 21:21:48.737257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.880 [2024-12-05 21:21:48.737264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.880 [2024-12-05 21:21:48.737270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.880 [2024-12-05 21:21:48.737284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.880 qpair failed and we were unable to recover it. 
00:28:40.880 [2024-12-05 21:21:48.747174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.880 [2024-12-05 21:21:48.747226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.880 [2024-12-05 21:21:48.747240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.880 [2024-12-05 21:21:48.747246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.880 [2024-12-05 21:21:48.747252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.880 [2024-12-05 21:21:48.747267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.880 qpair failed and we were unable to recover it. 
00:28:40.880 [2024-12-05 21:21:48.757178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.757237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.757256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.757264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.757273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.757293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.767226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.767284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.767300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.767308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.767314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.767330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.777288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.777345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.777359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.777370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.777377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.777392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.787192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.787251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.787265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.787272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.787278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.787293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.797255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.797320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.797335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.797343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.797349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.797363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.807333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.807399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.807416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.807424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.807433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.807451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.817359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.817467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.817481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.817487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.817494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.817508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.827402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.827461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.827474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.827480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.827486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.827501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.837410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.837463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.837476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.837483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.837488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.837503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.847509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.847595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.881 [2024-12-05 21:21:48.847609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.881 [2024-12-05 21:21:48.847618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.881 [2024-12-05 21:21:48.847624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.881 [2024-12-05 21:21:48.847639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.881 qpair failed and we were unable to recover it. 
00:28:40.881 [2024-12-05 21:21:48.857429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.881 [2024-12-05 21:21:48.857495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.857508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.857514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.857520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.857535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.867520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.867576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.867589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.867596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.867602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.867616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.877510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.877564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.877577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.877583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.877589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.877603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.887556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.887608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.887622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.887628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.887634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.887652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.897609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.897659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.897672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.897678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.897684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.897699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.907559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.907608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.907621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.907627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.907634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.907648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.917643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.917694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.917707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.917714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.917719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.917734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.927595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.927655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.927668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.927675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.927681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.927695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.937723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.937787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.937800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.937806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.937812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.937826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.947662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.947717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.947730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.947736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.947742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.947757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.882 [2024-12-05 21:21:48.957752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.882 [2024-12-05 21:21:48.957810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.882 [2024-12-05 21:21:48.957823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.882 [2024-12-05 21:21:48.957829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.882 [2024-12-05 21:21:48.957835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.882 [2024-12-05 21:21:48.957850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.882 qpair failed and we were unable to recover it. 
00:28:40.883 [2024-12-05 21:21:48.967774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.883 [2024-12-05 21:21:48.967829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.883 [2024-12-05 21:21:48.967842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.883 [2024-12-05 21:21:48.967848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.883 [2024-12-05 21:21:48.967854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.883 [2024-12-05 21:21:48.967868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.883 qpair failed and we were unable to recover it. 
00:28:40.883 [2024-12-05 21:21:48.977775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.883 [2024-12-05 21:21:48.977829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.883 [2024-12-05 21:21:48.977841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.883 [2024-12-05 21:21:48.977851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.883 [2024-12-05 21:21:48.977857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:40.883 [2024-12-05 21:21:48.977872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.883 qpair failed and we were unable to recover it. 
00:28:41.142 [2024-12-05 21:21:48.987842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.142 [2024-12-05 21:21:48.987929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.142 [2024-12-05 21:21:48.987942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.142 [2024-12-05 21:21:48.987949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.142 [2024-12-05 21:21:48.987954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.142 [2024-12-05 21:21:48.987968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.142 qpair failed and we were unable to recover it. 
00:28:41.142 [2024-12-05 21:21:48.997861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.142 [2024-12-05 21:21:48.997937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.142 [2024-12-05 21:21:48.997950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.142 [2024-12-05 21:21:48.997957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.142 [2024-12-05 21:21:48.997962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.142 [2024-12-05 21:21:48.997977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.142 qpair failed and we were unable to recover it. 
00:28:41.142 [2024-12-05 21:21:49.007883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.142 [2024-12-05 21:21:49.007939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.142 [2024-12-05 21:21:49.007951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.142 [2024-12-05 21:21:49.007957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.142 [2024-12-05 21:21:49.007963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.142 [2024-12-05 21:21:49.007977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.142 qpair failed and we were unable to recover it. 
00:28:41.142 [2024-12-05 21:21:49.017875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.142 [2024-12-05 21:21:49.017933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.142 [2024-12-05 21:21:49.017947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.142 [2024-12-05 21:21:49.017954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.142 [2024-12-05 21:21:49.017960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.142 [2024-12-05 21:21:49.017979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.142 qpair failed and we were unable to recover it. 
00:28:41.142 [2024-12-05 21:21:49.027867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.142 [2024-12-05 21:21:49.027920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.142 [2024-12-05 21:21:49.027933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.142 [2024-12-05 21:21:49.027939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.142 [2024-12-05 21:21:49.027945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.142 [2024-12-05 21:21:49.027960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.142 qpair failed and we were unable to recover it. 
00:28:41.142 [2024-12-05 21:21:49.037999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.142 [2024-12-05 21:21:49.038055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.142 [2024-12-05 21:21:49.038068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.142 [2024-12-05 21:21:49.038075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.142 [2024-12-05 21:21:49.038081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.142 [2024-12-05 21:21:49.038095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.142 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.048073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.048156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.048169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.048176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.048182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.048196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.058024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.058105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.058118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.058125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.058131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.058145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.068086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.068148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.068161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.068167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.068173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.068188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.078011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.078069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.078082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.078088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.078094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.078108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.088139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.088195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.088208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.088214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.088220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.088235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.098126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.098177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.098190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.098197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.098203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.098217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.108139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.108220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.108237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.108244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.108249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.108264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.118170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.118224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.118237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.118243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.118249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.118264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.128216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.128273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.128286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.128292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.128298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.128313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.138156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.143 [2024-12-05 21:21:49.138210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.143 [2024-12-05 21:21:49.138223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.143 [2024-12-05 21:21:49.138230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.143 [2024-12-05 21:21:49.138236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.143 [2024-12-05 21:21:49.138251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.143 qpair failed and we were unable to recover it. 
00:28:41.143 [2024-12-05 21:21:49.148239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.143 [2024-12-05 21:21:49.148295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.148308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.148314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.148323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.148338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.158290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.158347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.158360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.158371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.158377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.158392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.168323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.168401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.168415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.168421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.168427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.168442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.178355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.178416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.178430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.178437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.178442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.178457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.188378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.188429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.188442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.188449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.188454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.188468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.198416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.198471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.198485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.198491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.198497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.198511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.208381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.208438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.208450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.208457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.208463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.208477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.218394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.218447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.218460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.218467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.218473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.218487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.228524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.228575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.228588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.228594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.228600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.228614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.144 [2024-12-05 21:21:49.238533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.144 [2024-12-05 21:21:49.238584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.144 [2024-12-05 21:21:49.238600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.144 [2024-12-05 21:21:49.238606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.144 [2024-12-05 21:21:49.238612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.144 [2024-12-05 21:21:49.238626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.144 qpair failed and we were unable to recover it.
00:28:41.404 [2024-12-05 21:21:49.248556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.404 [2024-12-05 21:21:49.248613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.404 [2024-12-05 21:21:49.248625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.404 [2024-12-05 21:21:49.248632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.404 [2024-12-05 21:21:49.248637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.404 [2024-12-05 21:21:49.248651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.404 qpair failed and we were unable to recover it.
00:28:41.404 [2024-12-05 21:21:49.258595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.404 [2024-12-05 21:21:49.258692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.404 [2024-12-05 21:21:49.258705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.404 [2024-12-05 21:21:49.258711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.404 [2024-12-05 21:21:49.258717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.404 [2024-12-05 21:21:49.258732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.404 qpair failed and we were unable to recover it.
00:28:41.404 [2024-12-05 21:21:49.268613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.404 [2024-12-05 21:21:49.268660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.404 [2024-12-05 21:21:49.268674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.404 [2024-12-05 21:21:49.268680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.404 [2024-12-05 21:21:49.268686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.404 [2024-12-05 21:21:49.268700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.404 qpair failed and we were unable to recover it.
00:28:41.404 [2024-12-05 21:21:49.278574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.404 [2024-12-05 21:21:49.278628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.404 [2024-12-05 21:21:49.278641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.404 [2024-12-05 21:21:49.278647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.404 [2024-12-05 21:21:49.278656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.404 [2024-12-05 21:21:49.278671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.404 qpair failed and we were unable to recover it.
00:28:41.404 [2024-12-05 21:21:49.288704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.404 [2024-12-05 21:21:49.288764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.404 [2024-12-05 21:21:49.288778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.404 [2024-12-05 21:21:49.288784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.404 [2024-12-05 21:21:49.288790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.404 [2024-12-05 21:21:49.288805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.404 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.298688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.298739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.298752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.298758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.298764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.298779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.308716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.308773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.308786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.308793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.308799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.308813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.318807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.318883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.318896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.318902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.318908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.318922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.328813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.328869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.328882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.328888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.328894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.328908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.338770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.338875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.338888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.338895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.338900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.338915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.348866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.348919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.348932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.348938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.348944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.348958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.358929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.358999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.359012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.359019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.359024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.359039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.368901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.368955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.368971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.368978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.368984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.368999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.378919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.378970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.378983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.378990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.378996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.379010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.388942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.388993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.389006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.389012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.389018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.389032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.398983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.399036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.405 [2024-12-05 21:21:49.399049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.405 [2024-12-05 21:21:49.399055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.405 [2024-12-05 21:21:49.399061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.405 [2024-12-05 21:21:49.399076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.405 qpair failed and we were unable to recover it.
00:28:41.405 [2024-12-05 21:21:49.409005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.405 [2024-12-05 21:21:49.409067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.406 [2024-12-05 21:21:49.409081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.406 [2024-12-05 21:21:49.409090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.406 [2024-12-05 21:21:49.409096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.406 [2024-12-05 21:21:49.409111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.406 qpair failed and we were unable to recover it.
00:28:41.406 [2024-12-05 21:21:49.419024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.406 [2024-12-05 21:21:49.419073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.406 [2024-12-05 21:21:49.419086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.406 [2024-12-05 21:21:49.419092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.406 [2024-12-05 21:21:49.419098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.406 [2024-12-05 21:21:49.419113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.406 qpair failed and we were unable to recover it.
00:28:41.406 [2024-12-05 21:21:49.429038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.406 [2024-12-05 21:21:49.429109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.406 [2024-12-05 21:21:49.429122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.406 [2024-12-05 21:21:49.429129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.406 [2024-12-05 21:21:49.429134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.406 [2024-12-05 21:21:49.429149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.406 qpair failed and we were unable to recover it.
00:28:41.406 [2024-12-05 21:21:49.439129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.406 [2024-12-05 21:21:49.439181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.406 [2024-12-05 21:21:49.439194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.406 [2024-12-05 21:21:49.439201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.406 [2024-12-05 21:21:49.439207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.406 [2024-12-05 21:21:49.439222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.406 qpair failed and we were unable to recover it.
00:28:41.406 [2024-12-05 21:21:49.449133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.406 [2024-12-05 21:21:49.449189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.406 [2024-12-05 21:21:49.449202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.406 [2024-12-05 21:21:49.449209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.406 [2024-12-05 21:21:49.449215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90
00:28:41.406 [2024-12-05 21:21:49.449232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:41.406 qpair failed and we were unable to recover it.
00:28:41.406 [2024-12-05 21:21:49.459144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.406 [2024-12-05 21:21:49.459197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.406 [2024-12-05 21:21:49.459210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.406 [2024-12-05 21:21:49.459217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.406 [2024-12-05 21:21:49.459223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.406 [2024-12-05 21:21:49.459237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.406 qpair failed and we were unable to recover it. 
00:28:41.406 [2024-12-05 21:21:49.469177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.406 [2024-12-05 21:21:49.469231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.406 [2024-12-05 21:21:49.469244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.406 [2024-12-05 21:21:49.469250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.406 [2024-12-05 21:21:49.469256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.406 [2024-12-05 21:21:49.469271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.406 qpair failed and we were unable to recover it. 
00:28:41.406 [2024-12-05 21:21:49.479212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.406 [2024-12-05 21:21:49.479266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.406 [2024-12-05 21:21:49.479279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.406 [2024-12-05 21:21:49.479285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.406 [2024-12-05 21:21:49.479291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.406 [2024-12-05 21:21:49.479306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.406 qpair failed and we were unable to recover it. 
00:28:41.406 [2024-12-05 21:21:49.489227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.406 [2024-12-05 21:21:49.489282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.406 [2024-12-05 21:21:49.489295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.406 [2024-12-05 21:21:49.489301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.406 [2024-12-05 21:21:49.489307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.406 [2024-12-05 21:21:49.489321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.406 qpair failed and we were unable to recover it. 
00:28:41.406 [2024-12-05 21:21:49.499259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.406 [2024-12-05 21:21:49.499313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.406 [2024-12-05 21:21:49.499327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.406 [2024-12-05 21:21:49.499333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.406 [2024-12-05 21:21:49.499339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.406 [2024-12-05 21:21:49.499354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.406 qpair failed and we were unable to recover it. 
00:28:41.406 [2024-12-05 21:21:49.509295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.406 [2024-12-05 21:21:49.509345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.406 [2024-12-05 21:21:49.509358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.406 [2024-12-05 21:21:49.509364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.406 [2024-12-05 21:21:49.509374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.406 [2024-12-05 21:21:49.509388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.406 qpair failed and we were unable to recover it. 
00:28:41.665 [2024-12-05 21:21:49.519323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.665 [2024-12-05 21:21:49.519382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.665 [2024-12-05 21:21:49.519395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.519402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.519407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.519422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.529347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.529404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.529418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.529425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.529431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.529445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.539366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.539419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.539433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.539442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.539448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.539462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.549435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.549490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.549504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.549512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.549518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.549533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.559362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.559428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.559441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.559447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.559453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.559468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.569448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.569521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.569534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.569540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.569547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.569561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.579474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.579522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.579535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.579542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.579547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.579564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.589548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.589602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.589615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.589622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.589628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.589641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.599540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.599592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.599604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.599611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.599617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.599631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.609583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.609637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.609650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.609657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.609662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.609676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.619592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.619645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.619658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.619664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.619670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.619684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.629627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.629683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.629696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.629703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.629708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.629722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.639658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.639733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.639746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.666 [2024-12-05 21:21:49.639752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.666 [2024-12-05 21:21:49.639758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.666 [2024-12-05 21:21:49.639773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.666 qpair failed and we were unable to recover it. 
00:28:41.666 [2024-12-05 21:21:49.649710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.666 [2024-12-05 21:21:49.649769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.666 [2024-12-05 21:21:49.649782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.649789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.649795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.649811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.659702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.659765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.659778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.659785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.659790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.659804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.669729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.669779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.669794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.669800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.669807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.669821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.679797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.679853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.679866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.679872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.679878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.679892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.689812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.689867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.689880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.689886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.689892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.689906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.699800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.699850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.699863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.699870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.699876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.699889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.709881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.709944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.709957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.709963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.709973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.709988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.719876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.719960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.719974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.719980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.719985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.719999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.729885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.729942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.729955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.729961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.729967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.729982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.739911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.739962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.739975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.739981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.739988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.740002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
00:28:41.667 [2024-12-05 21:21:49.749968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.667 [2024-12-05 21:21:49.750024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.667 [2024-12-05 21:21:49.750037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.667 [2024-12-05 21:21:49.750043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.667 [2024-12-05 21:21:49.750049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:41.667 [2024-12-05 21:21:49.750063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:41.667 qpair failed and we were unable to recover it. 
[... the same seven-record connect-failure sequence (ctrlr.c:764 "Unknown controller ID 0x1" through nvme_qpair.c:812 "CQ transport error -6 ... on qpair id 2", then "qpair failed and we were unable to recover it.") repeats 34 more times at ~10 ms intervals, timestamps 2024-12-05 21:21:49.759 through 21:21:50.091, console time 00:28:41.667-00:28:42.190 ...]
00:28:42.190 [2024-12-05 21:21:50.100956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.101074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.101106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.101120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.101135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.101164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.111047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.111105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.111120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.111127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.111133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.111148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.121046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.121110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.121124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.121131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.121137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.121152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.131059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.131132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.131146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.131153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.131158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.131173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.141063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.141116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.141130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.141137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.141143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.141161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.151141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.151196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.151210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.151217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.151224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.151239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.161162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.161218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.161230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.161236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.161243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.161257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.171176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.171241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.171254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.171261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.171266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.171281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.181234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.181290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.181303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.181310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.181315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.181330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.191240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.191299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.191313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.191320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.191326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.191340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.201264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.201347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.201361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.190 [2024-12-05 21:21:50.201372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.190 [2024-12-05 21:21:50.201378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.190 [2024-12-05 21:21:50.201393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.190 qpair failed and we were unable to recover it. 
00:28:42.190 [2024-12-05 21:21:50.211291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.190 [2024-12-05 21:21:50.211345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.190 [2024-12-05 21:21:50.211358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.211365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.211374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.211389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.221286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.221339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.221353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.221362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.221373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.221388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.231320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.231377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.231393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.231400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.231406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.231421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.241356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.241431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.241445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.241451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.241457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.241472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.251394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.251454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.251468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.251474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.251480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.251495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.261381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.261460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.261474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.261482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.261488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.261503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.271475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.271539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.271552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.271559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.271568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.271582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.281462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.281517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.281530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.281537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.281543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.281556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.191 [2024-12-05 21:21:50.291493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.191 [2024-12-05 21:21:50.291550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.191 [2024-12-05 21:21:50.291564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.191 [2024-12-05 21:21:50.291571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.191 [2024-12-05 21:21:50.291577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.191 [2024-12-05 21:21:50.291591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.191 qpair failed and we were unable to recover it. 
00:28:42.450 [2024-12-05 21:21:50.301526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.301586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.301599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.301605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.301611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.301626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.311535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.311589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.311602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.311608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.311614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.311628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.321570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.321622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.321636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.321643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.321648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.321662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.331627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.331684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.331696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.331703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.331708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.331722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.341632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.341685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.341698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.341705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.341710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.341724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.351648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.351701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.351714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.351720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.351726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.351740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.361678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.361732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.361748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.361754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.361760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.361775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [2024-12-05 21:21:50.371739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.451 [2024-12-05 21:21:50.371804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.451 [2024-12-05 21:21:50.371817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.451 [2024-12-05 21:21:50.371824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.451 [2024-12-05 21:21:50.371830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.451 [2024-12-05 21:21:50.371843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.451 qpair failed and we were unable to recover it. 
00:28:42.451 [... the same seven-record CONNECT failure sequence repeated 34 more times for qpair id 2, at roughly 10 ms intervals from 21:21:50.381682 through 21:21:50.712789 (console timestamps 00:28:42.451-00:28:42.713); each repetition ends with "qpair failed and we were unable to recover it." ...]
00:28:42.713 [2024-12-05 21:21:50.722636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.713 [2024-12-05 21:21:50.722709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.713 [2024-12-05 21:21:50.722722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.713 [2024-12-05 21:21:50.722729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.722735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.722748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.732792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.732848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.732861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.732867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.732873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.732887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.742758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.742812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.742825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.742831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.742837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.742851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.752821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.752880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.752893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.752900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.752905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.752920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.762821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.762877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.762890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.762896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.762902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.762916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.772878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.772935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.772949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.772955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.772962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.772975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.782853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.782903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.782916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.782922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.782928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.782942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.792898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.792986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.793002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.793008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.793014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.793028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.802939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.803010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.803024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.803030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.803036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.803050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.714 [2024-12-05 21:21:50.812956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.714 [2024-12-05 21:21:50.813005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.714 [2024-12-05 21:21:50.813019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.714 [2024-12-05 21:21:50.813025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.714 [2024-12-05 21:21:50.813031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.714 [2024-12-05 21:21:50.813045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.714 qpair failed and we were unable to recover it. 
00:28:42.973 [2024-12-05 21:21:50.823023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.973 [2024-12-05 21:21:50.823073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.973 [2024-12-05 21:21:50.823085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.973 [2024-12-05 21:21:50.823091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.973 [2024-12-05 21:21:50.823097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.973 [2024-12-05 21:21:50.823112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.973 qpair failed and we were unable to recover it. 
00:28:42.973 [2024-12-05 21:21:50.833014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.973 [2024-12-05 21:21:50.833089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.973 [2024-12-05 21:21:50.833102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.973 [2024-12-05 21:21:50.833109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.973 [2024-12-05 21:21:50.833118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.973 [2024-12-05 21:21:50.833133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.973 qpair failed and we were unable to recover it. 
00:28:42.973 [2024-12-05 21:21:50.843062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.973 [2024-12-05 21:21:50.843127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.973 [2024-12-05 21:21:50.843140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.973 [2024-12-05 21:21:50.843147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.973 [2024-12-05 21:21:50.843153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.973 [2024-12-05 21:21:50.843167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.973 qpair failed and we were unable to recover it. 
00:28:42.973 [2024-12-05 21:21:50.853100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.853156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.853169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.853175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.853181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.853196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.863100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.863150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.863163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.863170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.863175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.863189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.873148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.873201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.873214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.873220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.873226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.873241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.883211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.883264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.883277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.883284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.883289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.883304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.893207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.893261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.893274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.893280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.893286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.893301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.903238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.903323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.903336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.903343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.903348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.903363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.913272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.913365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.913382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.913388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.913394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.913408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.923297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.923353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.923379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.923385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.923391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.923405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.933318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.933404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.933417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.933424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.933429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.933443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.943349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.943405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.943418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.943424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.943430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.943444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.953377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.953427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.953440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.953446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.953452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.953466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.963413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.963470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.963483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.963490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.963498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.974 [2024-12-05 21:21:50.963513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.974 qpair failed and we were unable to recover it. 
00:28:42.974 [2024-12-05 21:21:50.973430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.974 [2024-12-05 21:21:50.973479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.974 [2024-12-05 21:21:50.973492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.974 [2024-12-05 21:21:50.973498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.974 [2024-12-05 21:21:50.973504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.975 [2024-12-05 21:21:50.973518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.975 qpair failed and we were unable to recover it. 
00:28:42.975 [2024-12-05 21:21:50.983460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.975 [2024-12-05 21:21:50.983510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.975 [2024-12-05 21:21:50.983523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.975 [2024-12-05 21:21:50.983529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.975 [2024-12-05 21:21:50.983535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.975 [2024-12-05 21:21:50.983549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.975 qpair failed and we were unable to recover it. 
00:28:42.975 [2024-12-05 21:21:50.993484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.975 [2024-12-05 21:21:50.993534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.975 [2024-12-05 21:21:50.993546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.975 [2024-12-05 21:21:50.993553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.975 [2024-12-05 21:21:50.993558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:42.975 [2024-12-05 21:21:50.993573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:42.975 qpair failed and we were unable to recover it. 
[previous 7-line CONNECT failure cycle repeated 34 more times at ~10 ms intervals, 2024-12-05 21:21:51.003 through 21:21:51.334]
00:28:43.496 [2024-12-05 21:21:51.344503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.344590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.344603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.344609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.344615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.344630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.354496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.354552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.354567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.354574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.354580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.354594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.364529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.364583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.364596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.364602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.364609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.364623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.374551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.374615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.374628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.374635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.374640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.374655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.384575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.384626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.384638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.384645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.384651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.384665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.394600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.394655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.394668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.394674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.394683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.394698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.404634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.404684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.404697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.404703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.404710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.404724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.414651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.414706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.414718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.414725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.414731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.414745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.424715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.424776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.424789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.424796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.424801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.424816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.434708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.434784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.434797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.434804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.434810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.434824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.444743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.444800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.444813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.444819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.444825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.444839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.454765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.454818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.454830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.454837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.454843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.454858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.464739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.464842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.496 [2024-12-05 21:21:51.464855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.496 [2024-12-05 21:21:51.464861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.496 [2024-12-05 21:21:51.464867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.496 [2024-12-05 21:21:51.464881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.496 qpair failed and we were unable to recover it. 
00:28:43.496 [2024-12-05 21:21:51.474830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.496 [2024-12-05 21:21:51.474878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.474891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.474897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.474903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.474917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.484856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.484919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.484934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.484941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.484946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.484961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.494911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.494963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.494976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.494983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.494989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.495003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.504903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.504959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.504971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.504978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.504984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.504998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.514935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.514988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.515001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.515007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.515013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.515027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.525000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.525084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.525096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.525103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.525111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.525126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.534999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.535050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.535063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.535069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.535074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.535089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.545031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.545086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.545099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.545105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.545111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.545125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.555057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.555108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.555122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.555128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.555134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.555148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.565084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.565153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.565167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.565174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.565180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.565195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.575124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.575178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.575191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.575198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.575204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.575219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.585145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.585201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.585214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.585221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.585227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.585241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.497 [2024-12-05 21:21:51.595168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.497 [2024-12-05 21:21:51.595217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.497 [2024-12-05 21:21:51.595231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.497 [2024-12-05 21:21:51.595237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.497 [2024-12-05 21:21:51.595243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.497 [2024-12-05 21:21:51.595258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.497 qpair failed and we were unable to recover it. 
00:28:43.756 [2024-12-05 21:21:51.605228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.756 [2024-12-05 21:21:51.605300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.756 [2024-12-05 21:21:51.605314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.756 [2024-12-05 21:21:51.605320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.756 [2024-12-05 21:21:51.605326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.756 [2024-12-05 21:21:51.605340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.756 qpair failed and we were unable to recover it. 
00:28:43.756 [2024-12-05 21:21:51.615235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.756 [2024-12-05 21:21:51.615298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.756 [2024-12-05 21:21:51.615312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.756 [2024-12-05 21:21:51.615318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.756 [2024-12-05 21:21:51.615324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.756 [2024-12-05 21:21:51.615338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.756 qpair failed and we were unable to recover it. 
00:28:43.756 [2024-12-05 21:21:51.625204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.625297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.625310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.625316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.625322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.625336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.635207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.635262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.635275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.635282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.635288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.635302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.645349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.645411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.645425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.645432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.645438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.645453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.655323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.655383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.655397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.655406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.655412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.655427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.665298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.665352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.665365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.665376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.665382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.665396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.675478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.675554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.675567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.675574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.675579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.675594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.685426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.685481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.685493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.685500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.685506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.685520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.695469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.695528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.695541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.695547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.695553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.695570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.705478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.705526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.705539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.705545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.705551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.705565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.715541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.715633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.715646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.715653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.715659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.715674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.725540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.725598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.725611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.725618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.725624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.725638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.735567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.735624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.735637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.735643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.735649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.735663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.745654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.745708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.757 [2024-12-05 21:21:51.745721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.757 [2024-12-05 21:21:51.745728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.757 [2024-12-05 21:21:51.745734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.757 [2024-12-05 21:21:51.745748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.757 qpair failed and we were unable to recover it. 
00:28:43.757 [2024-12-05 21:21:51.755569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.757 [2024-12-05 21:21:51.755623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.755636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.755642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.755648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.755662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.765602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.765655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.765668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.765675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.765681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.765695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.775609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.775659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.775672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.775678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.775684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.775699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.785658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.785714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.785730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.785737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.785743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.785756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.795759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.795816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.795829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.795835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.795841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.795856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.805726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.805814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.805826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.805833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.805839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.805854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.815787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.815842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.815855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.815862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.815867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.815882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.825834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.825891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.825904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.825911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.825916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.825933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.835786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.835882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.835895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.835902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.835907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.835922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.845877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.845933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.845946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.845952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.845958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.845972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:43.758 [2024-12-05 21:21:51.855993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.758 [2024-12-05 21:21:51.856078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.758 [2024-12-05 21:21:51.856091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.758 [2024-12-05 21:21:51.856098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.758 [2024-12-05 21:21:51.856103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:43.758 [2024-12-05 21:21:51.856117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.758 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.865953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.866030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.866042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.866049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.866055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.866069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.875902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.875957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.875970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.875976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.875982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.875996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.886002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.886060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.886074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.886080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.886086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.886101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.896013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.896063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.896076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.896083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.896089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.896103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.905990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.906083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.906096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.906102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.906108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.906122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.916016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.916067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.916086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.916093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.916098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.916113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
00:28:44.018 [2024-12-05 21:21:51.926063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.018 [2024-12-05 21:21:51.926116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.018 [2024-12-05 21:21:51.926128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.018 [2024-12-05 21:21:51.926135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.018 [2024-12-05 21:21:51.926141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.018 [2024-12-05 21:21:51.926156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.018 qpair failed and we were unable to recover it. 
[Identical error sequence repeated 33 more times at ~10 ms intervals, timestamps 2024-12-05 21:21:51.936 through 21:21:52.257: ctrlr.c:764:_nvmf_ctrlr_add_io_qpair "Unknown controller ID 0x1", nvme_fabric.c connect failure rc -5 (sct 1, sc 130), failed to connect tqpair=0x7fa9e0000b90, CQ transport error -6 on qpair id 2, "qpair failed and we were unable to recover it." Final occurrence follows.]
00:28:44.284 [2024-12-05 21:21:52.267073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.284 [2024-12-05 21:21:52.267125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.284 [2024-12-05 21:21:52.267138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.284 [2024-12-05 21:21:52.267144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.284 [2024-12-05 21:21:52.267150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.284 [2024-12-05 21:21:52.267164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.284 qpair failed and we were unable to recover it. 
00:28:44.284 [2024-12-05 21:21:52.277167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.284 [2024-12-05 21:21:52.277220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.284 [2024-12-05 21:21:52.277233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.284 [2024-12-05 21:21:52.277239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.284 [2024-12-05 21:21:52.277245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.284 [2024-12-05 21:21:52.277260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.284 qpair failed and we were unable to recover it. 
00:28:44.284 [2024-12-05 21:21:52.287148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.284 [2024-12-05 21:21:52.287206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.284 [2024-12-05 21:21:52.287219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.284 [2024-12-05 21:21:52.287226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.284 [2024-12-05 21:21:52.287232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.284 [2024-12-05 21:21:52.287247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.284 qpair failed and we were unable to recover it. 
00:28:44.284 [2024-12-05 21:21:52.297171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.284 [2024-12-05 21:21:52.297227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.284 [2024-12-05 21:21:52.297240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.284 [2024-12-05 21:21:52.297246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.284 [2024-12-05 21:21:52.297252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.284 [2024-12-05 21:21:52.297267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.284 qpair failed and we were unable to recover it. 
00:28:44.284 [2024-12-05 21:21:52.307191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.284 [2024-12-05 21:21:52.307246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.284 [2024-12-05 21:21:52.307259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.284 [2024-12-05 21:21:52.307266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.284 [2024-12-05 21:21:52.307272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.284 [2024-12-05 21:21:52.307286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.284 qpair failed and we were unable to recover it. 
00:28:44.284 [2024-12-05 21:21:52.317225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.284 [2024-12-05 21:21:52.317274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.284 [2024-12-05 21:21:52.317287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.284 [2024-12-05 21:21:52.317294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.284 [2024-12-05 21:21:52.317299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.284 [2024-12-05 21:21:52.317314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.284 qpair failed and we were unable to recover it. 
00:28:44.284 [2024-12-05 21:21:52.327257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.285 [2024-12-05 21:21:52.327318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.285 [2024-12-05 21:21:52.327331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.285 [2024-12-05 21:21:52.327338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.285 [2024-12-05 21:21:52.327344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.285 [2024-12-05 21:21:52.327358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.285 qpair failed and we were unable to recover it. 
00:28:44.285 [2024-12-05 21:21:52.337273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.285 [2024-12-05 21:21:52.337324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.285 [2024-12-05 21:21:52.337337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.285 [2024-12-05 21:21:52.337344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.285 [2024-12-05 21:21:52.337350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.285 [2024-12-05 21:21:52.337364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.285 qpair failed and we were unable to recover it. 
00:28:44.285 [2024-12-05 21:21:52.347308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.285 [2024-12-05 21:21:52.347361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.285 [2024-12-05 21:21:52.347380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.285 [2024-12-05 21:21:52.347386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.285 [2024-12-05 21:21:52.347392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.285 [2024-12-05 21:21:52.347407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.285 qpair failed and we were unable to recover it. 
00:28:44.285 [2024-12-05 21:21:52.357328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.285 [2024-12-05 21:21:52.357382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.285 [2024-12-05 21:21:52.357395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.285 [2024-12-05 21:21:52.357402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.285 [2024-12-05 21:21:52.357408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.285 [2024-12-05 21:21:52.357422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.285 qpair failed and we were unable to recover it. 
00:28:44.285 [2024-12-05 21:21:52.367379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.285 [2024-12-05 21:21:52.367435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.285 [2024-12-05 21:21:52.367448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.285 [2024-12-05 21:21:52.367455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.285 [2024-12-05 21:21:52.367461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.285 [2024-12-05 21:21:52.367475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.285 qpair failed and we were unable to recover it. 
00:28:44.285 [2024-12-05 21:21:52.377403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.285 [2024-12-05 21:21:52.377456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.285 [2024-12-05 21:21:52.377469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.285 [2024-12-05 21:21:52.377475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.285 [2024-12-05 21:21:52.377481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.285 [2024-12-05 21:21:52.377496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.285 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-12-05 21:21:52.387455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.612 [2024-12-05 21:21:52.387516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.612 [2024-12-05 21:21:52.387530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.612 [2024-12-05 21:21:52.387537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.612 [2024-12-05 21:21:52.387543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.612 [2024-12-05 21:21:52.387561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-12-05 21:21:52.397475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.612 [2024-12-05 21:21:52.397529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.612 [2024-12-05 21:21:52.397543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.612 [2024-12-05 21:21:52.397551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.612 [2024-12-05 21:21:52.397557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.612 [2024-12-05 21:21:52.397573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-12-05 21:21:52.407497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.612 [2024-12-05 21:21:52.407551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.612 [2024-12-05 21:21:52.407564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.612 [2024-12-05 21:21:52.407571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.612 [2024-12-05 21:21:52.407576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.612 [2024-12-05 21:21:52.407591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-12-05 21:21:52.417593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.612 [2024-12-05 21:21:52.417650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.612 [2024-12-05 21:21:52.417663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.612 [2024-12-05 21:21:52.417670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.612 [2024-12-05 21:21:52.417676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.612 [2024-12-05 21:21:52.417690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-12-05 21:21:52.427557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.427610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.427623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.427629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.427635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.427649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.437570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.437623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.437636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.437643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.437649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.437663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.447606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.447664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.447676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.447683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.447689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.447703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.457661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.457750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.457763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.457769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.457775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.457789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.467656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.467705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.467718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.467725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.467731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.467745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.477668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.477721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.477737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.477743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.477749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.477763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.487716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.487809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.487821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.487828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.487833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.487847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.497732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.497784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.497797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.497803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.497809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.497823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.507790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.507844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.507857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.507863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.507869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.507883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.517783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.517836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.517849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.517856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.517864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.517879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.527759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.527814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.527828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.527834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.527840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.527855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.537848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.537902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.537915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.537922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.537927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.537942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.547871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.613 [2024-12-05 21:21:52.547947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.613 [2024-12-05 21:21:52.547960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.613 [2024-12-05 21:21:52.547967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.613 [2024-12-05 21:21:52.547972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.613 [2024-12-05 21:21:52.547986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-12-05 21:21:52.557902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.557950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.557963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.557969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.557975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.557989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.567961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.568039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.568052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.568059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.568064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.568079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.577968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.578026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.578040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.578046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.578052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.578067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.587997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.588051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.588064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.588070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.588076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.588090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.598062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.598116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.598129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.598136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.598142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.598156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.608065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.608117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.608132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.608139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.608145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.608159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.618105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.618162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.618176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.618182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.618189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.618203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.628135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.628190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.628205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.628212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.628219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.628233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.638175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.638239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.638251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.638258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.638264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.638278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.648163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.648238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.648252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.648264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.648270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.648284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.658174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.658229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.658242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.658249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.658255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.658270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.668246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.668308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.668321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.668328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.668335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.668349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.678253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.678308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.614 [2024-12-05 21:21:52.678321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.614 [2024-12-05 21:21:52.678327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.614 [2024-12-05 21:21:52.678334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.614 [2024-12-05 21:21:52.678348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.614 qpair failed and we were unable to recover it. 
00:28:44.614 [2024-12-05 21:21:52.688281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.614 [2024-12-05 21:21:52.688338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.615 [2024-12-05 21:21:52.688352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.615 [2024-12-05 21:21:52.688359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.615 [2024-12-05 21:21:52.688365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.615 [2024-12-05 21:21:52.688386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.615 qpair failed and we were unable to recover it. 
00:28:44.615 [2024-12-05 21:21:52.698316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.615 [2024-12-05 21:21:52.698375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.615 [2024-12-05 21:21:52.698389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.615 [2024-12-05 21:21:52.698395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.615 [2024-12-05 21:21:52.698401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.615 [2024-12-05 21:21:52.698415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.615 qpair failed and we were unable to recover it. 
00:28:44.615 [2024-12-05 21:21:52.708337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.615 [2024-12-05 21:21:52.708397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.615 [2024-12-05 21:21:52.708410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.615 [2024-12-05 21:21:52.708416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.615 [2024-12-05 21:21:52.708422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.615 [2024-12-05 21:21:52.708436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.615 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.718404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.718459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.718473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.718480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.718486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.718501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.728416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.728484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.728497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.728504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.728511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.728525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.738435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.738495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.738508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.738515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.738521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.738535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.748509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.748594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.748607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.748614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.748619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.748634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.758478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.758529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.758542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.758549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.758555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.758568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.768538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.768615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.768628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.768634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.768640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.768654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.778597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.778656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.778669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.778679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.778684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.778699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.788617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.788674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.901 [2024-12-05 21:21:52.788687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.901 [2024-12-05 21:21:52.788693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.901 [2024-12-05 21:21:52.788699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.901 [2024-12-05 21:21:52.788713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.901 qpair failed and we were unable to recover it. 
00:28:44.901 [2024-12-05 21:21:52.798597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.901 [2024-12-05 21:21:52.798649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.798662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.798668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.798674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.798688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.808645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.808722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.808735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.808742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.808748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.808762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.818699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.818764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.818777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.818783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.818789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.818807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.828681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.828760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.828773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.828780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.828786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.828801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.838721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.838776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.838789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.838796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.838802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.838816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.848742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.848798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.848811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.848818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.848824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.848838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.858770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.858824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.858837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.858843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.858849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.858863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.868798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.868852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.868865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.868872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.868878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.868892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.878815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.878867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.878880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.878887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.878893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.878907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.888876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.888954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.888967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.888974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.888979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.888994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.898932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.898992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.899005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.899011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.899017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.899031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.908916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.908968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.908985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.908992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.908997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.909012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.918972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.919027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.902 [2024-12-05 21:21:52.919040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.902 [2024-12-05 21:21:52.919047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.902 [2024-12-05 21:21:52.919053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.902 [2024-12-05 21:21:52.919067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.902 qpair failed and we were unable to recover it. 
00:28:44.902 [2024-12-05 21:21:52.929003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.902 [2024-12-05 21:21:52.929077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.929090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.929097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.929103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.929117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.939017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.939081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.939094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.939101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.939107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.939121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.949033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.949086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.949099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.949105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.949114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.949129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.959060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.959140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.959153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.959160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.959166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.959181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.969094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.969147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.969160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.969167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.969173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.969187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.979088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.979137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.979150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.979157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.979162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.979177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.989133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.989188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.989201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.989208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.989213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.989228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:44.903 [2024-12-05 21:21:52.999196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.903 [2024-12-05 21:21:52.999244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.903 [2024-12-05 21:21:52.999258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.903 [2024-12-05 21:21:52.999264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.903 [2024-12-05 21:21:52.999270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:44.903 [2024-12-05 21:21:52.999284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.903 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-12-05 21:21:53.009207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.162 [2024-12-05 21:21:53.009261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.162 [2024-12-05 21:21:53.009274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.162 [2024-12-05 21:21:53.009281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.162 [2024-12-05 21:21:53.009287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:45.162 [2024-12-05 21:21:53.009301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-12-05 21:21:53.019225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.162 [2024-12-05 21:21:53.019281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.162 [2024-12-05 21:21:53.019295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.162 [2024-12-05 21:21:53.019301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.162 [2024-12-05 21:21:53.019308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e0000b90 00:28:45.162 [2024-12-05 21:21:53.019321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-12-05 21:21:53.029290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.162 [2024-12-05 21:21:53.029412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.162 [2024-12-05 21:21:53.029469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.162 [2024-12-05 21:21:53.029494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.162 [2024-12-05 21:21:53.029515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e8000b90 00:28:45.162 [2024-12-05 21:21:53.029568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-12-05 21:21:53.039284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.162 [2024-12-05 21:21:53.039360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.162 [2024-12-05 21:21:53.039408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.163 [2024-12-05 21:21:53.039425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.163 [2024-12-05 21:21:53.039438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9e8000b90 00:28:45.163 [2024-12-05 21:21:53.039471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-12-05 21:21:53.039586] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:45.163 A controller has encountered a failure and is being reset. 00:28:45.163 Controller properly reset. 00:28:45.163 Initializing NVMe Controllers 00:28:45.163 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:45.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:45.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:45.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:45.163 Initialization complete. Launching workers. 
00:28:45.163 Starting thread on core 1 00:28:45.163 Starting thread on core 2 00:28:45.163 Starting thread on core 3 00:28:45.163 Starting thread on core 0 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:45.163 00:28:45.163 real 0m11.284s 00:28:45.163 user 0m21.836s 00:28:45.163 sys 0m4.729s 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.163 ************************************ 00:28:45.163 END TEST nvmf_target_disconnect_tc2 00:28:45.163 ************************************ 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.163 rmmod nvme_tcp 00:28:45.163 rmmod nvme_fabrics 00:28:45.163 rmmod nvme_keyring 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1470870 ']' 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1470870 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1470870 ']' 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1470870 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.163 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470870 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470870' 00:28:45.422 killing process with pid 1470870 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1470870 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1470870 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.422 21:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.958 21:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:47.958 00:28:47.958 real 0m20.073s 00:28:47.958 user 0m49.203s 00:28:47.958 sys 0m9.620s 00:28:47.958 21:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.958 21:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:47.958 ************************************ 00:28:47.958 END TEST nvmf_target_disconnect 00:28:47.958 ************************************ 00:28:47.958 21:21:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:47.958 00:28:47.958 real 5m55.429s 00:28:47.958 user 10m40.769s 00:28:47.958 sys 1m58.489s 00:28:47.958 21:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.958 21:21:55 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.958 ************************************ 00:28:47.958 END TEST nvmf_host 00:28:47.958 ************************************ 00:28:47.958 21:21:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:47.958 21:21:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:47.958 21:21:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:47.958 21:21:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:47.958 21:21:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.958 21:21:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.958 ************************************ 00:28:47.958 START TEST nvmf_target_core_interrupt_mode 00:28:47.958 ************************************ 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:47.958 * Looking for test storage... 
00:28:47.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:47.958 21:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:47.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.958 --rc 
genhtml_branch_coverage=1 00:28:47.958 --rc genhtml_function_coverage=1 00:28:47.958 --rc genhtml_legend=1 00:28:47.958 --rc geninfo_all_blocks=1 00:28:47.958 --rc geninfo_unexecuted_blocks=1 00:28:47.958 00:28:47.958 ' 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:47.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.958 --rc genhtml_branch_coverage=1 00:28:47.958 --rc genhtml_function_coverage=1 00:28:47.958 --rc genhtml_legend=1 00:28:47.958 --rc geninfo_all_blocks=1 00:28:47.958 --rc geninfo_unexecuted_blocks=1 00:28:47.958 00:28:47.958 ' 00:28:47.958 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:47.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.958 --rc genhtml_branch_coverage=1 00:28:47.958 --rc genhtml_function_coverage=1 00:28:47.958 --rc genhtml_legend=1 00:28:47.958 --rc geninfo_all_blocks=1 00:28:47.958 --rc geninfo_unexecuted_blocks=1 00:28:47.958 00:28:47.958 ' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:47.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.959 --rc genhtml_branch_coverage=1 00:28:47.959 --rc genhtml_function_coverage=1 00:28:47.959 --rc genhtml_legend=1 00:28:47.959 --rc geninfo_all_blocks=1 00:28:47.959 --rc geninfo_unexecuted_blocks=1 00:28:47.959 00:28:47.959 ' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.959 
21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.959 21:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:47.959 
21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:47.959 ************************************ 00:28:47.959 START TEST nvmf_abort 00:28:47.959 ************************************ 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:47.959 * Looking for test storage... 
00:28:47.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:47.959 21:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.959 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:48.219 21:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.219 --rc genhtml_branch_coverage=1 00:28:48.219 --rc genhtml_function_coverage=1 00:28:48.219 --rc genhtml_legend=1 00:28:48.219 --rc geninfo_all_blocks=1 00:28:48.219 --rc geninfo_unexecuted_blocks=1 00:28:48.219 00:28:48.219 ' 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.219 --rc genhtml_branch_coverage=1 00:28:48.219 --rc genhtml_function_coverage=1 00:28:48.219 --rc genhtml_legend=1 00:28:48.219 --rc geninfo_all_blocks=1 00:28:48.219 --rc geninfo_unexecuted_blocks=1 00:28:48.219 00:28:48.219 ' 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.219 --rc genhtml_branch_coverage=1 00:28:48.219 --rc genhtml_function_coverage=1 00:28:48.219 --rc genhtml_legend=1 00:28:48.219 --rc geninfo_all_blocks=1 00:28:48.219 --rc geninfo_unexecuted_blocks=1 00:28:48.219 00:28:48.219 ' 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.219 --rc genhtml_branch_coverage=1 00:28:48.219 --rc genhtml_function_coverage=1 00:28:48.219 --rc genhtml_legend=1 00:28:48.219 --rc geninfo_all_blocks=1 00:28:48.219 --rc geninfo_unexecuted_blocks=1 00:28:48.219 00:28:48.219 ' 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.219 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.220 21:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.220 21:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.220 21:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
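The trace above shows `common.sh` composing the target command line by appending flags to a bash array (`NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)` at common.sh@29, `--interrupt-mode` at common.sh@34). A minimal standalone sketch of that pattern — the binary path and variable defaults here are assumptions, not taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the array-based flag composition seen in the trace.
# ./build/bin/nvmf_tgt is an assumed binary path; SHM id 0 and
# interrupt mode enabled mirror the values visible in the log.
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
INTERRUPT_MODE=1

# common.sh@29: shared-memory id and log-level flags
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# common.sh@33-34: append --interrupt-mode only when requested,
# keeping the final invocation word-split safe via the array
if [ "$INTERRUPT_MODE" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi

echo "${NVMF_APP[@]}"
```

Building the command as an array (rather than a string) is what lets the script conditionally accumulate flags and later prefix the whole thing with `ip netns exec`, as the trace does at common.sh@293.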
00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.494 21:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.494 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.494 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.494 
21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.494 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.495 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.495 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.495 21:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.495 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:28:53.754 00:28:53.754 --- 10.0.0.2 ping statistics --- 00:28:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.754 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:28:53.754 00:28:53.754 --- 10.0.0.1 ping statistics --- 00:28:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.754 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1475419 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1475419 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1475419 ']' 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.754 21:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.013 [2024-12-05 21:22:01.895527] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:54.014 [2024-12-05 21:22:01.896489] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:28:54.014 [2024-12-05 21:22:01.896528] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.014 [2024-12-05 21:22:01.974543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:54.014 [2024-12-05 21:22:02.016692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.014 [2024-12-05 21:22:02.016726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.014 [2024-12-05 21:22:02.016736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.014 [2024-12-05 21:22:02.016742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.014 [2024-12-05 21:22:02.016747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.014 [2024-12-05 21:22:02.018185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.014 [2024-12-05 21:22:02.018289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.014 [2024-12-05 21:22:02.018290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.014 [2024-12-05 21:22:02.086789] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:54.014 [2024-12-05 21:22:02.087530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:54.014 [2024-12-05 21:22:02.087609] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:54.014 [2024-12-05 21:22:02.087758] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:54.014 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.014 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:54.014 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.014 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.014 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 [2024-12-05 21:22:02.155006] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:54.273 Malloc0 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 Delay0 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 [2024-12-05 21:22:02.246911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.273 21:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:54.533 [2024-12-05 21:22:02.417536] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:56.438 Initializing NVMe Controllers 00:28:56.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:56.438 controller IO queue size 128 less than required 00:28:56.438 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:56.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:56.438 Initialization complete. Launching workers. 
00:28:56.438 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37746 00:28:56.438 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37803, failed to submit 66 00:28:56.438 success 37746, unsuccessful 57, failed 0 00:28:56.438 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:56.438 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.438 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.438 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.438 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.439 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.439 rmmod nvme_tcp 00:28:56.439 rmmod nvme_fabrics 00:28:56.439 rmmod nvme_keyring 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.698 21:22:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1475419 ']' 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1475419 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1475419 ']' 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1475419 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475419 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1475419' 00:28:56.698 killing process with pid 1475419 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1475419 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1475419 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.698 21:22:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.698 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.957 21:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.865 00:28:58.865 real 0m10.975s 00:28:58.865 user 0m10.448s 00:28:58.865 sys 0m5.541s 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:58.865 ************************************ 00:28:58.865 END TEST nvmf_abort 00:28:58.865 ************************************ 00:28:58.865 21:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:58.865 ************************************ 00:28:58.865 START TEST nvmf_ns_hotplug_stress 00:28:58.865 ************************************ 00:28:58.865 21:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:59.124 * Looking for test storage... 
00:28:59.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.124 21:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.124 21:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.124 --rc genhtml_branch_coverage=1 00:28:59.124 --rc genhtml_function_coverage=1 00:28:59.124 --rc genhtml_legend=1 00:28:59.124 --rc geninfo_all_blocks=1 00:28:59.124 --rc geninfo_unexecuted_blocks=1 00:28:59.124 00:28:59.124 ' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.124 --rc genhtml_branch_coverage=1 00:28:59.124 --rc genhtml_function_coverage=1 00:28:59.124 --rc genhtml_legend=1 00:28:59.124 --rc geninfo_all_blocks=1 00:28:59.124 --rc geninfo_unexecuted_blocks=1 00:28:59.124 00:28:59.124 ' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.124 --rc genhtml_branch_coverage=1 00:28:59.124 --rc genhtml_function_coverage=1 00:28:59.124 --rc genhtml_legend=1 00:28:59.124 --rc geninfo_all_blocks=1 00:28:59.124 --rc geninfo_unexecuted_blocks=1 00:28:59.124 00:28:59.124 ' 00:28:59.124 21:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.124 --rc genhtml_branch_coverage=1 00:28:59.124 --rc genhtml_function_coverage=1 00:28:59.124 --rc genhtml_legend=1 00:28:59.124 --rc geninfo_all_blocks=1 00:28:59.124 --rc geninfo_unexecuted_blocks=1 00:28:59.124 00:28:59.124 ' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.124 21:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.124 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.125 
21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.125 21:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.683 
21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.683 21:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:05.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.683 21:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:05.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.683 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.684 
21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:05.684 Found net devices under 0000:86:00.0: cvl_0_0 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:05.684 Found net devices under 0000:86:00.1: cvl_0_1 00:29:05.684 
21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:29:05.684 00:29:05.684 --- 10.0.0.2 ping statistics --- 00:29:05.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.684 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:05.684 00:29:05.684 --- 10.0.0.1 ping statistics --- 00:29:05.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.684 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:05.684 21:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:05.684 21:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1479397 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1479397 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1479397 ']' 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.684 [2024-12-05 21:22:13.073502] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:05.684 [2024-12-05 21:22:13.074463] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:29:05.684 [2024-12-05 21:22:13.074503] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.684 [2024-12-05 21:22:13.152355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.684 [2024-12-05 21:22:13.193591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.684 [2024-12-05 21:22:13.193626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.684 [2024-12-05 21:22:13.193633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.684 [2024-12-05 21:22:13.193639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.684 [2024-12-05 21:22:13.193645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.684 [2024-12-05 21:22:13.195056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.684 [2024-12-05 21:22:13.195159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.684 [2024-12-05 21:22:13.195160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.684 [2024-12-05 21:22:13.263678] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:05.684 [2024-12-05 21:22:13.264449] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:05.684 [2024-12-05 21:22:13.264633] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:05.684 [2024-12-05 21:22:13.264750] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:05.684 [2024-12-05 21:22:13.495839] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:05.684 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.942 [2024-12-05 21:22:13.884270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.942 21:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.201 21:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:06.201 Malloc0 00:29:06.460 21:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:06.460 Delay0 00:29:06.460 21:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.717 21:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:06.975 NULL1 00:29:06.975 21:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:07.232 21:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1479745 00:29:07.232 21:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:07.232 21:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:07.232 21:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.164 Read completed with error (sct=0, sc=11) 00:29:08.164 21:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:08.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.422 21:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:08.422 21:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:08.680 true 00:29:08.680 21:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:08.680 21:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.615 21:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.615 21:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:09.615 21:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:09.874 true 00:29:09.874 21:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:09.874 21:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:10.132 21:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.391 21:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:10.391 21:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:10.650 true 00:29:10.650 21:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:10.650 21:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.587 21:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.846 21:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:11.846 21:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:11.846 true 00:29:11.846 21:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:11.846 21:22:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.104 21:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.363 21:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:12.363 21:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:12.622 true 00:29:12.622 21:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:12.622 21:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.559 21:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.818 
21:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:13.818 21:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:14.077 true 00:29:14.077 21:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:14.077 21:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.014 21:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.014 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:15.014 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:15.273 true 00:29:15.273 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:15.273 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.532 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:29:15.791 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:15.791 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:15.791 true 00:29:15.791 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:15.791 21:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 21:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.167 21:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:17.167 21:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:17.423 true 00:29:17.423 21:22:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:17.423 21:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.357 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.357 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:18.357 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:18.614 true 00:29:18.614 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:18.614 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.872 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.130 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:19.130 21:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:19.130 true 00:29:19.130 21:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:19.130 21:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 21:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.504 21:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:20.504 21:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:20.762 true 00:29:20.762 21:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:20.762 21:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.696 21:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.696 21:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:21.696 21:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:21.954 true 00:29:21.955 21:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:21.955 21:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.213 21:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.213 21:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:22.213 21:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:22.472 true 00:29:22.472 21:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:22.472 21:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 21:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.857 21:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:23.857 21:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:24.115 true 00:29:24.115 21:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:24.115 21:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.048 21:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.048 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:25.048 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:25.306 true 00:29:25.306 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:25.306 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.306 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.563 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:25.563 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:25.821 true 00:29:25.821 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:25.821 21:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.199 21:22:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.199 21:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:27.199 21:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:27.456 true 00:29:27.456 21:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:27.456 21:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.389 21:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.389 21:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:28.389 21:22:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:28.647 true 00:29:28.647 21:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:28.647 21:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.905 21:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.163 21:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:29.163 21:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:29.163 true 00:29:29.163 21:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:29.163 21:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.537 21:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.537 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:29:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.537 21:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:30.537 21:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:30.795 true 00:29:30.795 21:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:30.795 21:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.727 21:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.727 21:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:31.727 21:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:31.985 true 00:29:31.985 21:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:31.985 21:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.244 21:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.502 21:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:32.502 21:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:32.502 true 00:29:32.502 21:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:32.502 21:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.881 21:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.881 21:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:33.881 21:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:34.140 true 00:29:34.140 21:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:34.140 21:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:34.967 21:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:34.967 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:34.967 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:35.226 true 00:29:35.226 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:35.226 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.485 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.744 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:35.744 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:35.744 true 00:29:35.744 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:35.744 21:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 21:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:37.122 21:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:37.122 21:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:37.381 true 00:29:37.381 21:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:37.381 21:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.316 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.316 Initializing NVMe Controllers 00:29:38.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.316 Controller IO queue size 128, less than required. 00:29:38.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.316 Controller IO queue size 128, less than required. 00:29:38.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:38.316 Initialization complete. Launching workers. 
00:29:38.316 ========================================================
00:29:38.316 Latency(us)
00:29:38.316 Device Information : IOPS MiB/s Average min max
00:29:38.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2282.57 1.11 40939.35 2144.19 1013954.26
00:29:38.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18597.80 9.08 6882.31 1559.96 369509.19
00:29:38.316 ========================================================
00:29:38.316 Total : 20880.37 10.20 10605.31 1559.96 1013954.26
00:29:38.316
00:29:38.316 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:38.316 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:38.575 true 00:29:38.575 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1479745 00:29:38.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1479745) - No such process 00:29:38.575 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1479745 00:29:38.575 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.835 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:39.099 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:39.099
21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:39.099 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:39.099 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.099 21:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:39.099 null0 00:29:39.099 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.099 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.099 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:39.398 null1 00:29:39.398 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.399 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.399 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:39.399 null2 00:29:39.399 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.399 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.399 21:22:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:39.679 null3 00:29:39.679 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.679 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.679 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:39.946 null4 00:29:39.946 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.946 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.946 21:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:39.946 null5 00:29:39.946 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.946 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.946 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:40.205 null6 00:29:40.205 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:40.205 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:40.205 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:40.465 null7 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.465 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1485218 1485220 1485221 1485223 1485225 1485227 1485229 1485230 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.466 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:40.726 21:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.726 21:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.726 21:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.726 21:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.985 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.985 21:22:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.244 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.245 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:41.245 21:22:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.245 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.245 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:41.503 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:41.504 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:41.504 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:41.504 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.504 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:41.504 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:41.504 21:22:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.504 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:41.762 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:41.763 21:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.021 21:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:42.021 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.022 21:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.022 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:42.280 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.540 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:42.541 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.541 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.541 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.541 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.541 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:42.541 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.800 21:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.059 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.318 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.577 21:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:43.577 21:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.577 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.836 21:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.836 21:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.095 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:44.354 21:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:44.354 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:44.613 21:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.613 rmmod nvme_tcp 00:29:44.613 rmmod nvme_fabrics 00:29:44.613 rmmod nvme_keyring 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1479397 ']' 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1479397 00:29:44.613 21:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1479397 ']' 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1479397 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479397 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479397' 00:29:44.613 killing process with pid 1479397 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1479397 00:29:44.613 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1479397 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:44.873 
21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.873 21:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.409 00:29:47.409 real 0m47.961s 00:29:47.409 user 2m59.528s 00:29:47.409 sys 0m20.031s 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:47.409 ************************************ 00:29:47.409 END TEST nvmf_ns_hotplug_stress 00:29:47.409 ************************************ 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp --interrupt-mode 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.409 ************************************ 00:29:47.409 START TEST nvmf_delete_subsystem 00:29:47.409 ************************************ 00:29:47.409 21:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:47.409 * Looking for test storage... 00:29:47.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 
00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:47.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.409 --rc genhtml_branch_coverage=1 00:29:47.409 --rc genhtml_function_coverage=1 00:29:47.409 --rc genhtml_legend=1 00:29:47.409 --rc geninfo_all_blocks=1 00:29:47.409 --rc geninfo_unexecuted_blocks=1 00:29:47.409 00:29:47.409 ' 00:29:47.409 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.410 --rc genhtml_branch_coverage=1 00:29:47.410 --rc genhtml_function_coverage=1 00:29:47.410 --rc genhtml_legend=1 00:29:47.410 --rc geninfo_all_blocks=1 00:29:47.410 --rc geninfo_unexecuted_blocks=1 00:29:47.410 00:29:47.410 ' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.410 --rc genhtml_branch_coverage=1 00:29:47.410 --rc genhtml_function_coverage=1 00:29:47.410 --rc genhtml_legend=1 00:29:47.410 --rc geninfo_all_blocks=1 00:29:47.410 --rc geninfo_unexecuted_blocks=1 00:29:47.410 00:29:47.410 ' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.410 --rc genhtml_branch_coverage=1 00:29:47.410 --rc genhtml_function_coverage=1 00:29:47.410 --rc genhtml_legend=1 00:29:47.410 --rc geninfo_all_blocks=1 00:29:47.410 --rc geninfo_unexecuted_blocks=1 00:29:47.410 00:29:47.410 ' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.410 21:22:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.410 21:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.673 21:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.673 21:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:52.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:52.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.673 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.932 21:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:52.932 Found net devices under 0000:86:00.0: cvl_0_0 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:52.932 Found net devices under 0000:86:00.1: cvl_0_1 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.932 21:23:00 
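The discovery loop above walks each detected NIC's PCI address and reads the kernel net devices sysfs exposes under it. A minimal sketch of that pattern follows; it builds a fake sysfs tree so it runs without real hardware, and the PCI addresses and `cvl_*` names simply mirror the ones in the log:

```shell
#!/usr/bin/env bash
# Sketch of the common.sh net-device discovery loop, assuming two E810
# ports at the PCI addresses seen in the log. A temporary fake sysfs
# tree stands in for /sys/bus/pci/devices so no hardware is needed.
set -euo pipefail

sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # real code globs /sys/bus/pci/devices/$pci/net/*
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The `${pci_net_devs[@]##*/}` expansion is the same path-stripping step the log shows at `nvmf/common.sh@427`.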
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.932 21:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.932 21:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.932 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.932 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.932 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.932 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:52.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:29:52.932 00:29:52.932 --- 10.0.0.2 ping statistics --- 00:29:52.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.932 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:29:52.932 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:29:52.932 00:29:52.932 --- 10.0.0.1 ping statistics --- 00:29:52.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.932 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:53.190 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.190 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.191 
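The `nvmf_tcp_init` sequence above isolates the target port in its own network namespace, addresses both ends, opens the NVMe/TCP port, and verifies reachability with a ping in each direction. A minimal sketch of that flow, assuming the interfaces are already named `cvl_0_0` (target side) and `cvl_0_1` (initiator side); `DRY_RUN=1` (the default here) only prints the privileged commands instead of executing them:

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init: one NIC port moves into a namespace and acts
# as the target (10.0.0.2); the peer port stays in the root namespace
# as the initiator (10.0.0.1). Requires root when DRY_RUN=0.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [[ "$DRY_RUN" == 1 ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0   # ends up inside $NS with 10.0.0.2
INIT_IF=cvl_0_1     # stays in the root namespace with 10.0.0.1

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Putting the target behind `ip netns exec` is what later lets the same host run both sides of the TCP connection, as seen in the `NVMF_TARGET_NS_CMD` prefix on the `nvmf_tgt` launch below.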
21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1489501 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1489501 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1489501 ']' 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.191 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.191 [2024-12-05 21:23:01.135275] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.191 [2024-12-05 21:23:01.136186] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:29:53.191 [2024-12-05 21:23:01.136222] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.191 [2024-12-05 21:23:01.213172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:53.191 [2024-12-05 21:23:01.253336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.191 [2024-12-05 21:23:01.253379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.191 [2024-12-05 21:23:01.253386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.191 [2024-12-05 21:23:01.253392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.191 [2024-12-05 21:23:01.253397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.191 [2024-12-05 21:23:01.254648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.191 [2024-12-05 21:23:01.254649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.448 [2024-12-05 21:23:01.323126] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:53.448 [2024-12-05 21:23:01.323685] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:53.448 [2024-12-05 21:23:01.323854] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 [2024-12-05 21:23:01.399317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 [2024-12-05 21:23:01.427690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 NULL1 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 Delay0 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.448 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1489613 00:29:53.449 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:53.449 21:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:53.449 [2024-12-05 21:23:01.543254] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
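The RPC calls above set up the delete-during-I/O scenario: a TCP transport, a subsystem capped at 10 namespaces, a listener on the namespaced target IP, and a null bdev wrapped in a delay bdev so that `spdk_nvme_perf` still has I/O in flight when the subsystem is deleted. A condensed sketch of that sequence follows; the `rpc` stub here just records the calls so the order can be inspected without a running target (in the real test each line goes through `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Sketch of the delete_subsystem.sh RPC sequence from the log. The rpc()
# stub stands in for scripts/rpc.py against a live nvmf_tgt.
set -euo pipefail
CALLS=()
rpc() { CALLS+=("$*"); echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512           # 1000 MiB, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latency per op
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' runs here
rpc nvmf_delete_subsystem "$NQN"              # deleted while I/O is in flight
```

The huge delay-bdev latencies are what make the `Read/Write completed with error (sct=0, sc=8)` storm below the expected outcome: outstanding commands are aborted when the subsystem disappears, which is exactly what the test asserts.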
00:29:55.975 21:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.975 21:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.975 21:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 
00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 [2024-12-05 21:23:03.666366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba4a0 is same with the state(6) to be set 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 
00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read 
completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read 
completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 starting I/O failed: -6 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 [2024-12-05 21:23:03.670174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc68400d4b0 is same with the state(6) to be set 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write 
completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, 
sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Write completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:55.975 Read completed with error (sct=0, sc=8) 00:29:56.542 [2024-12-05 21:23:04.639271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb9b0 is same with the state(6) to be set 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 
Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 [2024-12-05 21:23:04.669524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba680 is same with the state(6) to be set 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 [2024-12-05 
21:23:04.669772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba2c0 is same with the state(6) to be set 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 [2024-12-05 21:23:04.672226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc68400d7e0 is same with the state(6) to be set 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed 
with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.801 Write completed with error (sct=0, sc=8) 00:29:56.801 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Write completed with error (sct=0, sc=8) 00:29:56.802 Write completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Write completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 Read completed with error (sct=0, sc=8) 00:29:56.802 [2024-12-05 21:23:04.672654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc68400d020 is same with the state(6) to be set 00:29:56.802 Initializing NVMe Controllers 00:29:56.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.802 Controller IO queue size 128, less than required. 00:29:56.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:56.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:56.802 Initialization complete. Launching workers. 
00:29:56.802 ======================================================== 00:29:56.802 Latency(us) 00:29:56.802 Device Information : IOPS MiB/s Average min max 00:29:56.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.21 0.09 879275.87 317.05 1006201.22 00:29:56.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.26 0.08 920172.59 231.64 2000962.39 00:29:56.802 ======================================================== 00:29:56.802 Total : 343.47 0.17 899072.25 231.64 2000962.39 00:29:56.802 00:29:56.802 [2024-12-05 21:23:04.673183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb9b0 (9): Bad file descriptor 00:29:56.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:56.802 21:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.802 21:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:56.802 21:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1489613 00:29:56.802 21:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1489613 00:29:57.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1489613) - No such process 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1489613 00:29:57.369 21:23:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1489613 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1489613 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:57.369 [2024-12-05 21:23:05.203558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1490107 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:29:57.369 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:57.369 [2024-12-05 21:23:05.286679] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:57.627 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:57.627 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:29:57.627 21:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:58.194 21:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:58.194 21:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:29:58.194 21:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:58.761 21:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:58.761 21:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:29:58.761 21:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:59.327 21:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:29:59.327 21:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:29:59.327 21:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:59.895 21:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:59.895 21:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:29:59.895 21:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:00.153 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:00.153 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:30:00.153 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:00.412 Initializing NVMe Controllers 00:30:00.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.412 Controller IO queue size 128, less than required. 00:30:00.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:00.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:00.412 Initialization complete. Launching workers. 
00:30:00.412 ======================================================== 00:30:00.412 Latency(us) 00:30:00.412 Device Information : IOPS MiB/s Average min max 00:30:00.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003367.55 1000141.07 1041768.62 00:30:00.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003593.74 1000210.99 1041236.58 00:30:00.412 ======================================================== 00:30:00.412 Total : 256.00 0.12 1003480.65 1000141.07 1041768.62 00:30:00.412 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1490107 00:30:00.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1490107) - No such process 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1490107 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.670 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.670 rmmod nvme_tcp 00:30:00.929 rmmod nvme_fabrics 00:30:00.929 rmmod nvme_keyring 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1489501 ']' 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1489501 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1489501 ']' 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1489501 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.929 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489501 00:30:00.930 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.930 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.930 21:23:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489501' 00:30:00.930 killing process with pid 1489501 00:30:00.930 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1489501 00:30:00.930 21:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1489501 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.189 21:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.189 21:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.094 00:30:03.094 real 0m16.131s 00:30:03.094 user 0m26.108s 00:30:03.094 sys 0m6.028s 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:03.094 ************************************ 00:30:03.094 END TEST nvmf_delete_subsystem 00:30:03.094 ************************************ 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:03.094 ************************************ 00:30:03.094 START TEST nvmf_host_management 00:30:03.094 ************************************ 00:30:03.094 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:03.354 * Looking for test storage... 
00:30:03.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.354 21:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.354 --rc genhtml_branch_coverage=1 00:30:03.354 --rc genhtml_function_coverage=1 00:30:03.354 --rc genhtml_legend=1 00:30:03.354 --rc geninfo_all_blocks=1 00:30:03.354 --rc geninfo_unexecuted_blocks=1 00:30:03.354 00:30:03.354 ' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.354 --rc genhtml_branch_coverage=1 00:30:03.354 --rc genhtml_function_coverage=1 00:30:03.354 --rc genhtml_legend=1 00:30:03.354 --rc geninfo_all_blocks=1 00:30:03.354 --rc geninfo_unexecuted_blocks=1 00:30:03.354 00:30:03.354 ' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.354 --rc genhtml_branch_coverage=1 00:30:03.354 --rc genhtml_function_coverage=1 00:30:03.354 --rc genhtml_legend=1 00:30:03.354 --rc geninfo_all_blocks=1 00:30:03.354 --rc geninfo_unexecuted_blocks=1 00:30:03.354 00:30:03.354 ' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:03.354 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.354 --rc genhtml_branch_coverage=1 00:30:03.354 --rc genhtml_function_coverage=1 00:30:03.354 --rc genhtml_legend=1 00:30:03.354 --rc geninfo_all_blocks=1 00:30:03.354 --rc geninfo_unexecuted_blocks=1 00:30:03.354 00:30:03.354 ' 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.354 21:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:03.354 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.355 
21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.355 21:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.928 
21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.928 21:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:09.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.928 21:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.928 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:09.929 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.929 21:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:09.929 Found net devices under 0000:86:00.0: cvl_0_0 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:09.929 Found net devices under 0000:86:00.1: cvl_0_1 00:30:09.929 21:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:30:09.929 00:30:09.929 --- 10.0.0.2 ping statistics --- 00:30:09.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.929 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:30:09.929 00:30:09.929 --- 10.0.0.1 ping statistics --- 00:30:09.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.929 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1494286 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1494286 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1494286 ']' 00:30:09.929 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.930 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.930 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.930 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.930 21:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.930 [2024-12-05 21:23:17.381010] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:09.930 [2024-12-05 21:23:17.381926] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:30:09.930 [2024-12-05 21:23:17.381961] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.930 [2024-12-05 21:23:17.464041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.930 [2024-12-05 21:23:17.505217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.930 [2024-12-05 21:23:17.505254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.930 [2024-12-05 21:23:17.505261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.930 [2024-12-05 21:23:17.505267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.930 [2024-12-05 21:23:17.505272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.930 [2024-12-05 21:23:17.506917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.930 [2024-12-05 21:23:17.507023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.930 [2024-12-05 21:23:17.507135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.930 [2024-12-05 21:23:17.507136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:09.930 [2024-12-05 21:23:17.575716] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.930 [2024-12-05 21:23:17.576480] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:09.930 [2024-12-05 21:23:17.576668] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:09.930 [2024-12-05 21:23:17.576826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:09.930 [2024-12-05 21:23:17.576890] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.270 [2024-12-05 21:23:18.263912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.270 21:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.270 Malloc0 00:30:10.270 [2024-12-05 21:23:18.348003] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:10.270 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1494421 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1494421 /var/tmp/bdevperf.sock 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1494421 ']' 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:10.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.530 { 00:30:10.530 "params": { 00:30:10.530 "name": "Nvme$subsystem", 00:30:10.530 "trtype": "$TEST_TRANSPORT", 00:30:10.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.530 "adrfam": "ipv4", 00:30:10.530 "trsvcid": "$NVMF_PORT", 00:30:10.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.530 "hdgst": ${hdgst:-false}, 00:30:10.530 "ddgst": ${ddgst:-false} 00:30:10.530 }, 00:30:10.530 "method": "bdev_nvme_attach_controller" 00:30:10.530 } 00:30:10.530 EOF 00:30:10.530 )") 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:10.530 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:10.530 "params": { 00:30:10.530 "name": "Nvme0", 00:30:10.530 "trtype": "tcp", 00:30:10.530 "traddr": "10.0.0.2", 00:30:10.530 "adrfam": "ipv4", 00:30:10.530 "trsvcid": "4420", 00:30:10.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:10.530 "hdgst": false, 00:30:10.530 "ddgst": false 00:30:10.530 }, 00:30:10.530 "method": "bdev_nvme_attach_controller" 00:30:10.530 }' 00:30:10.530 [2024-12-05 21:23:18.443852] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:30:10.530 [2024-12-05 21:23:18.443902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494421 ] 00:30:10.530 [2024-12-05 21:23:18.517739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.530 [2024-12-05 21:23:18.558848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.789 Running I/O for 10 seconds... 
00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:10.789 21:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:30:10.789 21:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.050 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.050 [2024-12-05 21:23:19.151976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 
[2024-12-05 21:23:19.152321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.050 [2024-12-05 21:23:19.152390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.050 [2024-12-05 21:23:19.152398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.051 [2024-12-05 21:23:19.152878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.051 [2024-12-05 21:23:19.152885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.152893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.152909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.152924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 
[2024-12-05 21:23:19.152940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.152954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.152970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.152985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.152991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.153000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.153007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.153015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.052 [2024-12-05 21:23:19.153023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.052 [2024-12-05 21:23:19.153988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:11.052 task offset: 104192 on job bdev=Nvme0n1 fails 00:30:11.052 00:30:11.052 Latency(us) 00:30:11.052 [2024-12-05T20:23:19.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.052 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.052 Job: Nvme0n1 ended in about 0.40 seconds with error 00:30:11.052 Verification LBA range: start 0x0 length 0x400 00:30:11.052 Nvme0n1 : 0.40 1930.25 120.64 160.85 0.00 29786.51 1552.58 26838.55 00:30:11.052 [2024-12-05T20:23:19.160Z] =================================================================================================================== 00:30:11.052 [2024-12-05T20:23:19.160Z] Total : 1930.25 120.64 160.85 0.00 29786.51 1552.58 26838.55 00:30:11.311 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.311 [2024-12-05 21:23:19.156353] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:11.311 [2024-12-05 21:23:19.156377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2102120 (9): Bad file descriptor 00:30:11.311 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:11.311 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.311 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.311 [2024-12-05 21:23:19.157364] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 
'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:11.311 [2024-12-05 21:23:19.157457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:11.311 [2024-12-05 21:23:19.157481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.311 [2024-12-05 21:23:19.157497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:11.311 [2024-12-05 21:23:19.157504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:11.311 [2024-12-05 21:23:19.157511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.311 [2024-12-05 21:23:19.157518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2102120 00:30:11.311 [2024-12-05 21:23:19.157537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2102120 (9): Bad file descriptor 00:30:11.311 [2024-12-05 21:23:19.157548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:11.311 [2024-12-05 21:23:19.157555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:11.311 [2024-12-05 21:23:19.157563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:11.311 [2024-12-05 21:23:19.157572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:11.311 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.311 21:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1494421 00:30:12.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1494421) - No such process 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:12.248 { 00:30:12.248 "params": { 00:30:12.248 "name": "Nvme$subsystem", 00:30:12.248 "trtype": "$TEST_TRANSPORT", 00:30:12.248 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:12.248 "adrfam": "ipv4", 00:30:12.248 "trsvcid": "$NVMF_PORT", 00:30:12.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.248 "hdgst": ${hdgst:-false}, 00:30:12.248 "ddgst": ${ddgst:-false} 00:30:12.248 }, 00:30:12.248 "method": "bdev_nvme_attach_controller" 00:30:12.248 } 00:30:12.248 EOF 00:30:12.248 )") 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:12.248 21:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:12.248 "params": { 00:30:12.248 "name": "Nvme0", 00:30:12.248 "trtype": "tcp", 00:30:12.248 "traddr": "10.0.0.2", 00:30:12.248 "adrfam": "ipv4", 00:30:12.248 "trsvcid": "4420", 00:30:12.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:12.248 "hdgst": false, 00:30:12.248 "ddgst": false 00:30:12.248 }, 00:30:12.248 "method": "bdev_nvme_attach_controller" 00:30:12.248 }' 00:30:12.248 [2024-12-05 21:23:20.217324] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:30:12.248 [2024-12-05 21:23:20.217379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494809 ] 00:30:12.248 [2024-12-05 21:23:20.294339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.248 [2024-12-05 21:23:20.335106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.507 Running I/O for 1 seconds... 
00:30:13.445 2123.00 IOPS, 132.69 MiB/s 00:30:13.445 Latency(us) 00:30:13.445 [2024-12-05T20:23:21.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.445 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.445 Verification LBA range: start 0x0 length 0x400 00:30:13.445 Nvme0n1 : 1.01 2168.28 135.52 0.00 0.00 28973.08 2543.42 26588.89 00:30:13.445 [2024-12-05T20:23:21.553Z] =================================================================================================================== 00:30:13.445 [2024-12-05T20:23:21.553Z] Total : 2168.28 135.52 0.00 0.00 28973.08 2543.42 26588.89 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:13.704 
21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.704 rmmod nvme_tcp 00:30:13.704 rmmod nvme_fabrics 00:30:13.704 rmmod nvme_keyring 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1494286 ']' 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1494286 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1494286 ']' 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1494286 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.704 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494286 00:30:13.963 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:13.963 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:13.963 21:23:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494286' 00:30:13.963 killing process with pid 1494286 00:30:13.963 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1494286 00:30:13.963 21:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1494286 00:30:13.963 [2024-12-05 21:23:22.005144] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.963 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.964 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.964 21:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:16.499 00:30:16.499 real 0m12.928s 00:30:16.499 user 0m17.725s 00:30:16.499 sys 0m6.369s 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:16.499 ************************************ 00:30:16.499 END TEST nvmf_host_management 00:30:16.499 ************************************ 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:16.499 ************************************ 00:30:16.499 START TEST nvmf_lvol 00:30:16.499 ************************************ 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:16.499 * Looking for test storage... 
00:30:16.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:16.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.499 --rc genhtml_branch_coverage=1 00:30:16.499 --rc genhtml_function_coverage=1 00:30:16.499 --rc genhtml_legend=1 00:30:16.499 --rc geninfo_all_blocks=1 00:30:16.499 --rc geninfo_unexecuted_blocks=1 00:30:16.499 00:30:16.499 ' 00:30:16.499 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:16.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.499 --rc genhtml_branch_coverage=1 00:30:16.500 --rc genhtml_function_coverage=1 00:30:16.500 --rc genhtml_legend=1 00:30:16.500 --rc geninfo_all_blocks=1 00:30:16.500 --rc geninfo_unexecuted_blocks=1 00:30:16.500 00:30:16.500 ' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:16.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.500 --rc genhtml_branch_coverage=1 00:30:16.500 --rc genhtml_function_coverage=1 00:30:16.500 --rc genhtml_legend=1 00:30:16.500 --rc geninfo_all_blocks=1 00:30:16.500 --rc geninfo_unexecuted_blocks=1 00:30:16.500 00:30:16.500 ' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:16.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.500 --rc genhtml_branch_coverage=1 00:30:16.500 --rc genhtml_function_coverage=1 00:30:16.500 --rc genhtml_legend=1 00:30:16.500 --rc geninfo_all_blocks=1 00:30:16.500 --rc geninfo_unexecuted_blocks=1 00:30:16.500 00:30:16.500 ' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.500 
21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.500 21:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.070 21:23:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.070 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.071 21:23:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:23.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.071 21:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:23.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.071 21:23:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:23.071 Found net devices under 0000:86:00.0: cvl_0_0 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.071 21:23:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:23.071 Found net devices under 0000:86:00.1: cvl_0_1 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:30:23.071 00:30:23.071 --- 10.0.0.2 ping statistics --- 00:30:23.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.071 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:30:23.071 00:30:23.071 --- 10.0.0.1 ping statistics --- 00:30:23.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.071 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1498553 
00:30:23.071 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1498553 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1498553 ']' 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:23.072 [2024-12-05 21:23:30.362629] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:23.072 [2024-12-05 21:23:30.363602] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:30:23.072 [2024-12-05 21:23:30.363640] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.072 [2024-12-05 21:23:30.442833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:23.072 [2024-12-05 21:23:30.484510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.072 [2024-12-05 21:23:30.484543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.072 [2024-12-05 21:23:30.484551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.072 [2024-12-05 21:23:30.484556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.072 [2024-12-05 21:23:30.484561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.072 [2024-12-05 21:23:30.485928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.072 [2024-12-05 21:23:30.486038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.072 [2024-12-05 21:23:30.486039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.072 [2024-12-05 21:23:30.554295] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:23.072 [2024-12-05 21:23:30.555102] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:23.072 [2024-12-05 21:23:30.555155] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:23.072 [2024-12-05 21:23:30.555315] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:23.072 [2024-12-05 21:23:30.786788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.072 21:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:23.072 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:23.072 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:23.340 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:23.340 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:23.599 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:23.599 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=afdbd6d8-1a9b-439e-be93-c3f50e6edc50 00:30:23.599 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u afdbd6d8-1a9b-439e-be93-c3f50e6edc50 lvol 20 00:30:23.857 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f7704790-2357-49d7-a1a5-b5e47da7bb01 00:30:23.857 21:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:24.116 21:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7704790-2357-49d7-a1a5-b5e47da7bb01 00:30:24.374 21:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.374 [2024-12-05 21:23:32.390695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.374 21:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:24.633 
21:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1498838 00:30:24.633 21:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:24.633 21:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:25.568 21:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f7704790-2357-49d7-a1a5-b5e47da7bb01 MY_SNAPSHOT 00:30:25.827 21:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=60610b5e-d713-418f-b93c-934a97892c6b 00:30:25.827 21:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f7704790-2357-49d7-a1a5-b5e47da7bb01 30 00:30:26.084 21:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 60610b5e-d713-418f-b93c-934a97892c6b MY_CLONE 00:30:26.343 21:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f34ac3b6-d2f8-41e7-832d-877021fac1cb 00:30:26.343 21:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f34ac3b6-d2f8-41e7-832d-877021fac1cb 00:30:26.910 21:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1498838 00:30:35.028 Initializing NVMe Controllers 00:30:35.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:35.028 
Controller IO queue size 128, less than required. 00:30:35.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:35.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:35.028 Initialization complete. Launching workers. 00:30:35.028 ======================================================== 00:30:35.028 Latency(us) 00:30:35.028 Device Information : IOPS MiB/s Average min max 00:30:35.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12397.10 48.43 10329.99 1551.66 71481.51 00:30:35.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12599.40 49.22 10163.96 2168.96 64769.52 00:30:35.028 ======================================================== 00:30:35.028 Total : 24996.50 97.64 10246.30 1551.66 71481.51 00:30:35.028 00:30:35.028 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.287 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7704790-2357-49d7-a1a5-b5e47da7bb01 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afdbd6d8-1a9b-439e-be93-c3f50e6edc50 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.545 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.545 rmmod nvme_tcp 00:30:35.803 rmmod nvme_fabrics 00:30:35.803 rmmod nvme_keyring 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1498553 ']' 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1498553 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1498553 ']' 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1498553 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1498553 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1498553' 00:30:35.803 killing process with pid 1498553 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1498553 00:30:35.803 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1498553 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.062 21:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.062 21:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.962 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.962 00:30:37.962 real 0m21.858s 00:30:37.962 user 0m55.729s 00:30:37.962 sys 0m9.801s 00:30:37.962 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.962 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.962 ************************************ 00:30:37.962 END TEST nvmf_lvol 00:30:37.962 ************************************ 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.220 ************************************ 00:30:38.220 START TEST nvmf_lvs_grow 00:30:38.220 ************************************ 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:38.220 * Looking for test storage... 
00:30:38.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.220 21:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:38.220 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.221 21:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.221 --rc genhtml_branch_coverage=1 00:30:38.221 --rc genhtml_function_coverage=1 00:30:38.221 --rc genhtml_legend=1 00:30:38.221 --rc geninfo_all_blocks=1 00:30:38.221 --rc geninfo_unexecuted_blocks=1 00:30:38.221 00:30:38.221 ' 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.221 --rc genhtml_branch_coverage=1 00:30:38.221 --rc genhtml_function_coverage=1 00:30:38.221 --rc genhtml_legend=1 00:30:38.221 --rc geninfo_all_blocks=1 00:30:38.221 --rc geninfo_unexecuted_blocks=1 00:30:38.221 00:30:38.221 ' 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.221 --rc genhtml_branch_coverage=1 00:30:38.221 --rc genhtml_function_coverage=1 00:30:38.221 --rc genhtml_legend=1 00:30:38.221 --rc geninfo_all_blocks=1 00:30:38.221 --rc geninfo_unexecuted_blocks=1 00:30:38.221 00:30:38.221 ' 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.221 --rc genhtml_branch_coverage=1 00:30:38.221 --rc genhtml_function_coverage=1 00:30:38.221 --rc genhtml_legend=1 00:30:38.221 --rc geninfo_all_blocks=1 00:30:38.221 --rc 
geninfo_unexecuted_blocks=1 00:30:38.221 00:30:38.221 ' 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:38.221 21:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.221 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.480 21:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.480 21:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.480 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.481 21:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.045 
21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.045 21:23:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.045 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.046 21:23:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.046 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.046 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.046 21:23:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.046 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.047 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.047 
21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.047 21:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:30:45.047 00:30:45.047 --- 10.0.0.2 ping statistics --- 00:30:45.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.047 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:30:45.047 00:30:45.047 --- 10.0.0.1 ping statistics --- 00:30:45.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.047 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.047 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.048 21:23:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1504188 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1504188 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1504188 ']' 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.048 21:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.048 [2024-12-05 21:23:52.297607] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.048 [2024-12-05 21:23:52.298512] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:30:45.048 [2024-12-05 21:23:52.298551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.048 [2024-12-05 21:23:52.377306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.048 [2024-12-05 21:23:52.415949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.048 [2024-12-05 21:23:52.415984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.048 [2024-12-05 21:23:52.415991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.048 [2024-12-05 21:23:52.415997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.048 [2024-12-05 21:23:52.416002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.048 [2024-12-05 21:23:52.416580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.048 [2024-12-05 21:23:52.484344] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.048 [2024-12-05 21:23:52.484572] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.048 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.048 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:45.048 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.048 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.048 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.307 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.307 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:45.307 [2024-12-05 21:23:53.349228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.307 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:45.307 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:45.307 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.307 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.564 ************************************ 00:30:45.564 START TEST lvs_grow_clean 00:30:45.564 ************************************ 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:45.564 21:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:45.564 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:45.822 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:45.822 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:45.822 21:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:46.080 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:46.080 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:46.080 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f8f47f0-a35b-420c-aa65-d4823042e725 lvol 150 00:30:46.339 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=80cee99c-27d0-4856-a8e2-916f54988640 00:30:46.339 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:46.339 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:46.339 [2024-12-05 21:23:54.392996] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:46.339 [2024-12-05 21:23:54.393125] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:46.339 true 00:30:46.339 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:46.339 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:46.598 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:46.598 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:46.856 21:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80cee99c-27d0-4856-a8e2-916f54988640 00:30:47.116 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:47.116 [2024-12-05 21:23:55.177470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.116 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1504690 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1504690 /var/tmp/bdevperf.sock 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1504690 ']' 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.374 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:47.374 [2024-12-05 21:23:55.421262] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:30:47.374 [2024-12-05 21:23:55.421313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504690 ] 00:30:47.632 [2024-12-05 21:23:55.494974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.632 [2024-12-05 21:23:55.536991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.632 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.632 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:47.632 21:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:47.906 Nvme0n1 00:30:47.906 21:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:48.165 [ 00:30:48.165 { 00:30:48.165 "name": "Nvme0n1", 00:30:48.165 "aliases": [ 00:30:48.165 "80cee99c-27d0-4856-a8e2-916f54988640" 00:30:48.165 ], 00:30:48.165 "product_name": "NVMe disk", 00:30:48.165 
"block_size": 4096, 00:30:48.165 "num_blocks": 38912, 00:30:48.165 "uuid": "80cee99c-27d0-4856-a8e2-916f54988640", 00:30:48.165 "numa_id": 1, 00:30:48.165 "assigned_rate_limits": { 00:30:48.165 "rw_ios_per_sec": 0, 00:30:48.165 "rw_mbytes_per_sec": 0, 00:30:48.165 "r_mbytes_per_sec": 0, 00:30:48.165 "w_mbytes_per_sec": 0 00:30:48.165 }, 00:30:48.165 "claimed": false, 00:30:48.165 "zoned": false, 00:30:48.165 "supported_io_types": { 00:30:48.165 "read": true, 00:30:48.165 "write": true, 00:30:48.165 "unmap": true, 00:30:48.165 "flush": true, 00:30:48.165 "reset": true, 00:30:48.165 "nvme_admin": true, 00:30:48.165 "nvme_io": true, 00:30:48.165 "nvme_io_md": false, 00:30:48.165 "write_zeroes": true, 00:30:48.165 "zcopy": false, 00:30:48.165 "get_zone_info": false, 00:30:48.165 "zone_management": false, 00:30:48.165 "zone_append": false, 00:30:48.165 "compare": true, 00:30:48.165 "compare_and_write": true, 00:30:48.165 "abort": true, 00:30:48.165 "seek_hole": false, 00:30:48.165 "seek_data": false, 00:30:48.165 "copy": true, 00:30:48.165 "nvme_iov_md": false 00:30:48.165 }, 00:30:48.165 "memory_domains": [ 00:30:48.165 { 00:30:48.165 "dma_device_id": "system", 00:30:48.165 "dma_device_type": 1 00:30:48.165 } 00:30:48.165 ], 00:30:48.165 "driver_specific": { 00:30:48.165 "nvme": [ 00:30:48.165 { 00:30:48.165 "trid": { 00:30:48.165 "trtype": "TCP", 00:30:48.165 "adrfam": "IPv4", 00:30:48.165 "traddr": "10.0.0.2", 00:30:48.165 "trsvcid": "4420", 00:30:48.165 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:48.165 }, 00:30:48.165 "ctrlr_data": { 00:30:48.165 "cntlid": 1, 00:30:48.165 "vendor_id": "0x8086", 00:30:48.165 "model_number": "SPDK bdev Controller", 00:30:48.165 "serial_number": "SPDK0", 00:30:48.165 "firmware_revision": "25.01", 00:30:48.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.165 "oacs": { 00:30:48.165 "security": 0, 00:30:48.165 "format": 0, 00:30:48.165 "firmware": 0, 00:30:48.165 "ns_manage": 0 00:30:48.165 }, 00:30:48.165 "multi_ctrlr": true, 
00:30:48.165 "ana_reporting": false 00:30:48.165 }, 00:30:48.165 "vs": { 00:30:48.165 "nvme_version": "1.3" 00:30:48.165 }, 00:30:48.165 "ns_data": { 00:30:48.165 "id": 1, 00:30:48.165 "can_share": true 00:30:48.165 } 00:30:48.165 } 00:30:48.165 ], 00:30:48.165 "mp_policy": "active_passive" 00:30:48.165 } 00:30:48.165 } 00:30:48.165 ] 00:30:48.165 21:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1504825 00:30:48.165 21:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:48.165 21:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:48.165 Running I/O for 10 seconds... 00:30:49.542 Latency(us) 00:30:49.542 [2024-12-05T20:23:57.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.542 Nvme0n1 : 1.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:49.542 [2024-12-05T20:23:57.650Z] =================================================================================================================== 00:30:49.542 [2024-12-05T20:23:57.650Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:49.542 00:30:50.110 21:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:50.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.369 Nvme0n1 : 2.00 23440.00 91.56 0.00 0.00 0.00 0.00 0.00 00:30:50.369 [2024-12-05T20:23:58.477Z] 
=================================================================================================================== 00:30:50.369 [2024-12-05T20:23:58.477Z] Total : 23440.00 91.56 0.00 0.00 0.00 0.00 0.00 00:30:50.369 00:30:50.369 true 00:30:50.369 21:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:50.369 21:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:50.629 21:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:50.629 21:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:50.629 21:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1504825 00:30:51.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.196 Nvme0n1 : 3.00 23500.67 91.80 0.00 0.00 0.00 0.00 0.00 00:30:51.196 [2024-12-05T20:23:59.304Z] =================================================================================================================== 00:30:51.196 [2024-12-05T20:23:59.304Z] Total : 23500.67 91.80 0.00 0.00 0.00 0.00 0.00 00:30:51.196 00:30:52.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.574 Nvme0n1 : 4.00 23562.75 92.04 0.00 0.00 0.00 0.00 0.00 00:30:52.574 [2024-12-05T20:24:00.682Z] =================================================================================================================== 00:30:52.574 [2024-12-05T20:24:00.682Z] Total : 23562.75 92.04 0.00 0.00 0.00 0.00 0.00 00:30:52.574 00:30:53.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:53.511 Nvme0n1 : 5.00 23600.00 92.19 0.00 0.00 0.00 0.00 0.00 00:30:53.511 [2024-12-05T20:24:01.619Z] =================================================================================================================== 00:30:53.511 [2024-12-05T20:24:01.619Z] Total : 23600.00 92.19 0.00 0.00 0.00 0.00 0.00 00:30:53.511 00:30:54.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:54.448 Nvme0n1 : 6.00 23667.17 92.45 0.00 0.00 0.00 0.00 0.00 00:30:54.448 [2024-12-05T20:24:02.556Z] =================================================================================================================== 00:30:54.448 [2024-12-05T20:24:02.556Z] Total : 23667.17 92.45 0.00 0.00 0.00 0.00 0.00 00:30:54.448 00:30:55.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.384 Nvme0n1 : 7.00 23715.14 92.64 0.00 0.00 0.00 0.00 0.00 00:30:55.384 [2024-12-05T20:24:03.492Z] =================================================================================================================== 00:30:55.384 [2024-12-05T20:24:03.492Z] Total : 23715.14 92.64 0.00 0.00 0.00 0.00 0.00 00:30:55.384 00:30:56.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:56.328 Nvme0n1 : 8.00 23751.12 92.78 0.00 0.00 0.00 0.00 0.00 00:30:56.328 [2024-12-05T20:24:04.436Z] =================================================================================================================== 00:30:56.328 [2024-12-05T20:24:04.436Z] Total : 23751.12 92.78 0.00 0.00 0.00 0.00 0.00 00:30:56.328 00:30:57.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.310 Nvme0n1 : 9.00 23750.89 92.78 0.00 0.00 0.00 0.00 0.00 00:30:57.310 [2024-12-05T20:24:05.418Z] =================================================================================================================== 00:30:57.310 [2024-12-05T20:24:05.418Z] Total : 23750.89 92.78 0.00 0.00 0.00 0.00 0.00 00:30:57.310 
00:30:58.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:58.300 Nvme0n1 : 10.00 23776.10 92.88 0.00 0.00 0.00 0.00 0.00 00:30:58.300 [2024-12-05T20:24:06.408Z] =================================================================================================================== 00:30:58.300 [2024-12-05T20:24:06.408Z] Total : 23776.10 92.88 0.00 0.00 0.00 0.00 0.00 00:30:58.300 00:30:58.300 00:30:58.300 Latency(us) 00:30:58.300 [2024-12-05T20:24:06.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:58.300 Nvme0n1 : 10.00 23777.89 92.88 0.00 0.00 5380.13 3229.99 25715.08 00:30:58.300 [2024-12-05T20:24:06.408Z] =================================================================================================================== 00:30:58.300 [2024-12-05T20:24:06.408Z] Total : 23777.89 92.88 0.00 0.00 5380.13 3229.99 25715.08 00:30:58.300 { 00:30:58.300 "results": [ 00:30:58.300 { 00:30:58.300 "job": "Nvme0n1", 00:30:58.300 "core_mask": "0x2", 00:30:58.300 "workload": "randwrite", 00:30:58.300 "status": "finished", 00:30:58.300 "queue_depth": 128, 00:30:58.300 "io_size": 4096, 00:30:58.300 "runtime": 10.00463, 00:30:58.300 "iops": 23777.89083654268, 00:30:58.300 "mibps": 92.88238608024484, 00:30:58.300 "io_failed": 0, 00:30:58.300 "io_timeout": 0, 00:30:58.300 "avg_latency_us": 5380.127196353481, 00:30:58.300 "min_latency_us": 3229.9885714285715, 00:30:58.300 "max_latency_us": 25715.078095238096 00:30:58.300 } 00:30:58.300 ], 00:30:58.300 "core_count": 1 00:30:58.300 } 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1504690 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1504690 ']' 00:30:58.300 21:24:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1504690 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1504690 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1504690' 00:30:58.300 killing process with pid 1504690 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1504690 00:30:58.300 Received shutdown signal, test time was about 10.000000 seconds 00:30:58.300 00:30:58.300 Latency(us) 00:30:58.300 [2024-12-05T20:24:06.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.300 [2024-12-05T20:24:06.408Z] =================================================================================================================== 00:30:58.300 [2024-12-05T20:24:06.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:58.300 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1504690 00:30:58.559 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:58.819 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:59.078 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:59.078 21:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:59.078 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:59.078 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:59.078 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:59.337 [2024-12-05 21:24:07.305072] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:59.337 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:30:59.596 request: 00:30:59.596 { 00:30:59.596 "uuid": "2f8f47f0-a35b-420c-aa65-d4823042e725", 00:30:59.596 "method": 
"bdev_lvol_get_lvstores", 00:30:59.596 "req_id": 1 00:30:59.596 } 00:30:59.596 Got JSON-RPC error response 00:30:59.596 response: 00:30:59.596 { 00:30:59.596 "code": -19, 00:30:59.596 "message": "No such device" 00:30:59.596 } 00:30:59.596 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:59.596 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:59.596 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:59.596 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:59.596 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:59.855 aio_bdev 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80cee99c-27d0-4856-a8e2-916f54988640 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=80cee99c-27d0-4856-a8e2-916f54988640 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:59.855 21:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80cee99c-27d0-4856-a8e2-916f54988640 -t 2000 00:31:00.115 [ 00:31:00.115 { 00:31:00.115 "name": "80cee99c-27d0-4856-a8e2-916f54988640", 00:31:00.115 "aliases": [ 00:31:00.115 "lvs/lvol" 00:31:00.115 ], 00:31:00.115 "product_name": "Logical Volume", 00:31:00.115 "block_size": 4096, 00:31:00.115 "num_blocks": 38912, 00:31:00.115 "uuid": "80cee99c-27d0-4856-a8e2-916f54988640", 00:31:00.115 "assigned_rate_limits": { 00:31:00.115 "rw_ios_per_sec": 0, 00:31:00.115 "rw_mbytes_per_sec": 0, 00:31:00.115 "r_mbytes_per_sec": 0, 00:31:00.115 "w_mbytes_per_sec": 0 00:31:00.115 }, 00:31:00.115 "claimed": false, 00:31:00.115 "zoned": false, 00:31:00.115 "supported_io_types": { 00:31:00.115 "read": true, 00:31:00.115 "write": true, 00:31:00.115 "unmap": true, 00:31:00.115 "flush": false, 00:31:00.115 "reset": true, 00:31:00.115 "nvme_admin": false, 00:31:00.115 "nvme_io": false, 00:31:00.115 "nvme_io_md": false, 00:31:00.115 "write_zeroes": true, 00:31:00.115 "zcopy": false, 00:31:00.115 "get_zone_info": false, 00:31:00.115 "zone_management": false, 00:31:00.115 "zone_append": false, 00:31:00.115 "compare": false, 00:31:00.115 "compare_and_write": false, 00:31:00.115 "abort": false, 00:31:00.115 "seek_hole": true, 00:31:00.115 "seek_data": true, 00:31:00.115 "copy": false, 00:31:00.115 "nvme_iov_md": false 00:31:00.115 }, 00:31:00.115 "driver_specific": { 00:31:00.115 "lvol": { 00:31:00.115 "lvol_store_uuid": "2f8f47f0-a35b-420c-aa65-d4823042e725", 00:31:00.115 "base_bdev": "aio_bdev", 00:31:00.115 
"thin_provision": false, 00:31:00.115 "num_allocated_clusters": 38, 00:31:00.115 "snapshot": false, 00:31:00.115 "clone": false, 00:31:00.115 "esnap_clone": false 00:31:00.115 } 00:31:00.115 } 00:31:00.115 } 00:31:00.115 ] 00:31:00.115 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:00.115 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:31:00.115 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:00.374 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:00.374 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f8f47f0-a35b-420c-aa65-d4823042e725 00:31:00.374 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:00.634 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:00.634 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80cee99c-27d0-4856-a8e2-916f54988640 00:31:00.634 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f8f47f0-a35b-420c-aa65-d4823042e725 
00:31:00.893 21:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:01.152 00:31:01.152 real 0m15.753s 00:31:01.152 user 0m15.259s 00:31:01.152 sys 0m1.474s 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:01.152 ************************************ 00:31:01.152 END TEST lvs_grow_clean 00:31:01.152 ************************************ 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:01.152 ************************************ 00:31:01.152 START TEST lvs_grow_dirty 00:31:01.152 ************************************ 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:01.152 21:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:01.152 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:01.412 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:01.412 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:01.672 21:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:01.672 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:01.672 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:01.931 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:01.932 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:01.932 21:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 lvol 150 00:31:02.191 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:02.191 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:02.191 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:02.191 [2024-12-05 21:24:10.233003] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:02.191 [2024-12-05 
21:24:10.233140] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:02.191 true 00:31:02.191 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:02.191 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:02.450 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:02.450 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:02.709 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:02.969 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.969 [2024-12-05 21:24:10.985392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.969 21:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1507781 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1507781 /var/tmp/bdevperf.sock 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1507781 ']' 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:03.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:03.228 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.229 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:03.229 [2024-12-05 21:24:11.219032] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:31:03.229 [2024-12-05 21:24:11.219081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507781 ] 00:31:03.229 [2024-12-05 21:24:11.290585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.229 [2024-12-05 21:24:11.332479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.487 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.487 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:03.487 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:03.746 Nvme0n1 00:31:03.746 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:04.004 [ 00:31:04.004 { 00:31:04.004 "name": "Nvme0n1", 00:31:04.004 "aliases": [ 00:31:04.004 "1c57f9a5-05a9-44e2-9be7-a547e5555711" 00:31:04.004 ], 00:31:04.004 "product_name": "NVMe disk", 00:31:04.004 "block_size": 4096, 00:31:04.004 "num_blocks": 38912, 00:31:04.004 "uuid": "1c57f9a5-05a9-44e2-9be7-a547e5555711", 00:31:04.004 "numa_id": 1, 00:31:04.004 "assigned_rate_limits": { 00:31:04.004 "rw_ios_per_sec": 0, 00:31:04.004 "rw_mbytes_per_sec": 0, 00:31:04.004 "r_mbytes_per_sec": 0, 00:31:04.004 "w_mbytes_per_sec": 0 00:31:04.004 }, 00:31:04.004 "claimed": false, 00:31:04.004 "zoned": false, 
00:31:04.004 "supported_io_types": { 00:31:04.004 "read": true, 00:31:04.004 "write": true, 00:31:04.004 "unmap": true, 00:31:04.004 "flush": true, 00:31:04.004 "reset": true, 00:31:04.004 "nvme_admin": true, 00:31:04.004 "nvme_io": true, 00:31:04.004 "nvme_io_md": false, 00:31:04.004 "write_zeroes": true, 00:31:04.004 "zcopy": false, 00:31:04.004 "get_zone_info": false, 00:31:04.004 "zone_management": false, 00:31:04.004 "zone_append": false, 00:31:04.004 "compare": true, 00:31:04.004 "compare_and_write": true, 00:31:04.004 "abort": true, 00:31:04.004 "seek_hole": false, 00:31:04.004 "seek_data": false, 00:31:04.004 "copy": true, 00:31:04.004 "nvme_iov_md": false 00:31:04.004 }, 00:31:04.004 "memory_domains": [ 00:31:04.004 { 00:31:04.004 "dma_device_id": "system", 00:31:04.004 "dma_device_type": 1 00:31:04.004 } 00:31:04.004 ], 00:31:04.004 "driver_specific": { 00:31:04.004 "nvme": [ 00:31:04.004 { 00:31:04.004 "trid": { 00:31:04.004 "trtype": "TCP", 00:31:04.004 "adrfam": "IPv4", 00:31:04.004 "traddr": "10.0.0.2", 00:31:04.004 "trsvcid": "4420", 00:31:04.004 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:04.004 }, 00:31:04.004 "ctrlr_data": { 00:31:04.004 "cntlid": 1, 00:31:04.004 "vendor_id": "0x8086", 00:31:04.004 "model_number": "SPDK bdev Controller", 00:31:04.004 "serial_number": "SPDK0", 00:31:04.004 "firmware_revision": "25.01", 00:31:04.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.004 "oacs": { 00:31:04.004 "security": 0, 00:31:04.004 "format": 0, 00:31:04.004 "firmware": 0, 00:31:04.004 "ns_manage": 0 00:31:04.004 }, 00:31:04.004 "multi_ctrlr": true, 00:31:04.004 "ana_reporting": false 00:31:04.004 }, 00:31:04.004 "vs": { 00:31:04.004 "nvme_version": "1.3" 00:31:04.004 }, 00:31:04.004 "ns_data": { 00:31:04.004 "id": 1, 00:31:04.004 "can_share": true 00:31:04.004 } 00:31:04.004 } 00:31:04.004 ], 00:31:04.004 "mp_policy": "active_passive" 00:31:04.004 } 00:31:04.004 } 00:31:04.004 ] 00:31:04.004 21:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1507842 00:31:04.004 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:04.004 21:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:04.004 Running I/O for 10 seconds... 00:31:04.941 Latency(us) 00:31:04.941 [2024-12-05T20:24:13.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.941 Nvme0n1 : 1.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:31:04.941 [2024-12-05T20:24:13.049Z] =================================================================================================================== 00:31:04.941 [2024-12-05T20:24:13.049Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:31:04.941 00:31:05.879 21:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:06.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.138 Nvme0n1 : 2.00 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:31:06.138 [2024-12-05T20:24:14.246Z] =================================================================================================================== 00:31:06.138 [2024-12-05T20:24:14.246Z] Total : 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:31:06.138 00:31:06.138 true 00:31:06.138 21:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:06.138 21:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:06.397 21:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:06.397 21:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:06.397 21:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1507842 00:31:06.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.965 Nvme0n1 : 3.00 23664.33 92.44 0.00 0.00 0.00 0.00 0.00 00:31:06.965 [2024-12-05T20:24:15.073Z] =================================================================================================================== 00:31:06.965 [2024-12-05T20:24:15.073Z] Total : 23664.33 92.44 0.00 0.00 0.00 0.00 0.00 00:31:06.965 00:31:07.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.898 Nvme0n1 : 4.00 23749.00 92.77 0.00 0.00 0.00 0.00 0.00 00:31:07.898 [2024-12-05T20:24:16.006Z] =================================================================================================================== 00:31:07.898 [2024-12-05T20:24:16.006Z] Total : 23749.00 92.77 0.00 0.00 0.00 0.00 0.00 00:31:07.898 00:31:09.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.277 Nvme0n1 : 5.00 23799.80 92.97 0.00 0.00 0.00 0.00 0.00 00:31:09.277 [2024-12-05T20:24:17.385Z] =================================================================================================================== 00:31:09.277 [2024-12-05T20:24:17.385Z] Total : 23799.80 92.97 0.00 0.00 0.00 0.00 0.00 00:31:09.277 00:31:10.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:10.214 Nvme0n1 : 6.00 23812.50 93.02 0.00 0.00 0.00 0.00 0.00 00:31:10.214 [2024-12-05T20:24:18.322Z] =================================================================================================================== 00:31:10.214 [2024-12-05T20:24:18.322Z] Total : 23812.50 93.02 0.00 0.00 0.00 0.00 0.00 00:31:10.214 00:31:11.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.151 Nvme0n1 : 7.00 23857.86 93.19 0.00 0.00 0.00 0.00 0.00 00:31:11.151 [2024-12-05T20:24:19.259Z] =================================================================================================================== 00:31:11.151 [2024-12-05T20:24:19.259Z] Total : 23857.86 93.19 0.00 0.00 0.00 0.00 0.00 00:31:11.151 00:31:12.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.090 Nvme0n1 : 8.00 23876.00 93.27 0.00 0.00 0.00 0.00 0.00 00:31:12.090 [2024-12-05T20:24:20.198Z] =================================================================================================================== 00:31:12.090 [2024-12-05T20:24:20.198Z] Total : 23876.00 93.27 0.00 0.00 0.00 0.00 0.00 00:31:12.090 00:31:13.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.029 Nvme0n1 : 9.00 23863.78 93.22 0.00 0.00 0.00 0.00 0.00 00:31:13.029 [2024-12-05T20:24:21.137Z] =================================================================================================================== 00:31:13.029 [2024-12-05T20:24:21.137Z] Total : 23863.78 93.22 0.00 0.00 0.00 0.00 0.00 00:31:13.029 00:31:14.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:14.067 Nvme0n1 : 10.00 23877.70 93.27 0.00 0.00 0.00 0.00 0.00 00:31:14.067 [2024-12-05T20:24:22.175Z] =================================================================================================================== 00:31:14.067 [2024-12-05T20:24:22.175Z] Total : 23877.70 93.27 0.00 0.00 0.00 0.00 0.00 00:31:14.067 00:31:14.067 
00:31:14.067 Latency(us) 00:31:14.067 [2024-12-05T20:24:22.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:14.067 Nvme0n1 : 10.00 23880.79 93.28 0.00 0.00 5356.79 3214.38 25340.59 00:31:14.067 [2024-12-05T20:24:22.175Z] =================================================================================================================== 00:31:14.067 [2024-12-05T20:24:22.175Z] Total : 23880.79 93.28 0.00 0.00 5356.79 3214.38 25340.59 00:31:14.067 { 00:31:14.067 "results": [ 00:31:14.067 { 00:31:14.067 "job": "Nvme0n1", 00:31:14.067 "core_mask": "0x2", 00:31:14.067 "workload": "randwrite", 00:31:14.067 "status": "finished", 00:31:14.067 "queue_depth": 128, 00:31:14.067 "io_size": 4096, 00:31:14.067 "runtime": 10.004068, 00:31:14.067 "iops": 23880.785296541366, 00:31:14.067 "mibps": 93.28431756461471, 00:31:14.067 "io_failed": 0, 00:31:14.067 "io_timeout": 0, 00:31:14.067 "avg_latency_us": 5356.793197917881, 00:31:14.067 "min_latency_us": 3214.384761904762, 00:31:14.067 "max_latency_us": 25340.586666666666 00:31:14.067 } 00:31:14.067 ], 00:31:14.067 "core_count": 1 00:31:14.067 } 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1507781 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1507781 ']' 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1507781 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.067 21:24:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507781 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507781' 00:31:14.067 killing process with pid 1507781 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1507781 00:31:14.067 Received shutdown signal, test time was about 10.000000 seconds 00:31:14.067 00:31:14.067 Latency(us) 00:31:14.067 [2024-12-05T20:24:22.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.067 [2024-12-05T20:24:22.175Z] =================================================================================================================== 00:31:14.067 [2024-12-05T20:24:22.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:14.067 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1507781 00:31:14.352 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.352 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.611 21:24:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:14.611 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1504188 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1504188 00:31:14.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1504188 Killed "${NVMF_APP[@]}" "$@" 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1509649 00:31:14.870 21:24:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1509649 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1509649 ']' 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.870 21:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.870 [2024-12-05 21:24:22.949446] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.870 [2024-12-05 21:24:22.950313] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:31:14.870 [2024-12-05 21:24:22.950348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.131 [2024-12-05 21:24:23.030225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.131 [2024-12-05 21:24:23.071514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.131 [2024-12-05 21:24:23.071550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.131 [2024-12-05 21:24:23.071557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.131 [2024-12-05 21:24:23.071563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.131 [2024-12-05 21:24:23.071568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.131 [2024-12-05 21:24:23.072151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.131 [2024-12-05 21:24:23.141136] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:15.131 [2024-12-05 21:24:23.141355] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:15.699 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.699 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:15.699 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.699 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.699 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:15.959 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.959 21:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:15.959 [2024-12-05 21:24:23.993690] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:15.959 [2024-12-05 21:24:23.993904] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:15.959 [2024-12-05 21:24:23.993989] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:15.959 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:16.219 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1c57f9a5-05a9-44e2-9be7-a547e5555711 -t 2000 00:31:16.479 [ 00:31:16.479 { 00:31:16.479 "name": "1c57f9a5-05a9-44e2-9be7-a547e5555711", 00:31:16.479 "aliases": [ 00:31:16.479 "lvs/lvol" 00:31:16.479 ], 00:31:16.479 "product_name": "Logical Volume", 00:31:16.479 "block_size": 4096, 00:31:16.479 "num_blocks": 38912, 00:31:16.479 "uuid": "1c57f9a5-05a9-44e2-9be7-a547e5555711", 00:31:16.479 "assigned_rate_limits": { 00:31:16.479 "rw_ios_per_sec": 0, 00:31:16.479 "rw_mbytes_per_sec": 0, 00:31:16.479 "r_mbytes_per_sec": 0, 00:31:16.479 "w_mbytes_per_sec": 0 00:31:16.479 }, 00:31:16.479 "claimed": false, 00:31:16.479 "zoned": false, 00:31:16.479 "supported_io_types": { 00:31:16.479 "read": true, 00:31:16.479 "write": true, 00:31:16.479 "unmap": true, 00:31:16.479 "flush": false, 00:31:16.479 "reset": true, 00:31:16.479 "nvme_admin": false, 00:31:16.479 "nvme_io": false, 00:31:16.479 "nvme_io_md": false, 00:31:16.479 "write_zeroes": true, 
00:31:16.479 "zcopy": false, 00:31:16.479 "get_zone_info": false, 00:31:16.479 "zone_management": false, 00:31:16.479 "zone_append": false, 00:31:16.479 "compare": false, 00:31:16.479 "compare_and_write": false, 00:31:16.479 "abort": false, 00:31:16.479 "seek_hole": true, 00:31:16.479 "seek_data": true, 00:31:16.479 "copy": false, 00:31:16.479 "nvme_iov_md": false 00:31:16.479 }, 00:31:16.479 "driver_specific": { 00:31:16.479 "lvol": { 00:31:16.479 "lvol_store_uuid": "6076ddfc-6d16-4e44-9d0f-6efe193b87b8", 00:31:16.479 "base_bdev": "aio_bdev", 00:31:16.479 "thin_provision": false, 00:31:16.479 "num_allocated_clusters": 38, 00:31:16.479 "snapshot": false, 00:31:16.479 "clone": false, 00:31:16.479 "esnap_clone": false 00:31:16.479 } 00:31:16.479 } 00:31:16.479 } 00:31:16.479 ] 00:31:16.479 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:16.479 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:16.479 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:16.738 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:16.738 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:16.738 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:16.738 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:16.738 21:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:16.997 [2024-12-05 21:24:24.972627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:16.997 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:17.255 request: 00:31:17.255 { 00:31:17.255 "uuid": "6076ddfc-6d16-4e44-9d0f-6efe193b87b8", 00:31:17.255 "method": "bdev_lvol_get_lvstores", 00:31:17.255 "req_id": 1 00:31:17.255 } 00:31:17.255 Got JSON-RPC error response 00:31:17.255 response: 00:31:17.255 { 00:31:17.255 "code": -19, 00:31:17.255 "message": "No such device" 00:31:17.255 } 00:31:17.255 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:17.255 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:17.255 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:17.255 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:17.255 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:17.513 aio_bdev 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:17.513 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1c57f9a5-05a9-44e2-9be7-a547e5555711 -t 2000 00:31:17.770 [ 00:31:17.770 { 00:31:17.770 "name": "1c57f9a5-05a9-44e2-9be7-a547e5555711", 00:31:17.770 "aliases": [ 00:31:17.770 "lvs/lvol" 00:31:17.770 ], 00:31:17.770 "product_name": "Logical Volume", 00:31:17.770 "block_size": 4096, 00:31:17.770 "num_blocks": 38912, 00:31:17.770 "uuid": "1c57f9a5-05a9-44e2-9be7-a547e5555711", 00:31:17.770 "assigned_rate_limits": { 00:31:17.770 "rw_ios_per_sec": 0, 00:31:17.770 "rw_mbytes_per_sec": 0, 00:31:17.770 
"r_mbytes_per_sec": 0, 00:31:17.770 "w_mbytes_per_sec": 0 00:31:17.770 }, 00:31:17.770 "claimed": false, 00:31:17.770 "zoned": false, 00:31:17.770 "supported_io_types": { 00:31:17.770 "read": true, 00:31:17.770 "write": true, 00:31:17.770 "unmap": true, 00:31:17.770 "flush": false, 00:31:17.770 "reset": true, 00:31:17.770 "nvme_admin": false, 00:31:17.770 "nvme_io": false, 00:31:17.770 "nvme_io_md": false, 00:31:17.770 "write_zeroes": true, 00:31:17.770 "zcopy": false, 00:31:17.770 "get_zone_info": false, 00:31:17.770 "zone_management": false, 00:31:17.770 "zone_append": false, 00:31:17.770 "compare": false, 00:31:17.770 "compare_and_write": false, 00:31:17.770 "abort": false, 00:31:17.770 "seek_hole": true, 00:31:17.770 "seek_data": true, 00:31:17.770 "copy": false, 00:31:17.770 "nvme_iov_md": false 00:31:17.770 }, 00:31:17.770 "driver_specific": { 00:31:17.770 "lvol": { 00:31:17.770 "lvol_store_uuid": "6076ddfc-6d16-4e44-9d0f-6efe193b87b8", 00:31:17.770 "base_bdev": "aio_bdev", 00:31:17.770 "thin_provision": false, 00:31:17.770 "num_allocated_clusters": 38, 00:31:17.770 "snapshot": false, 00:31:17.770 "clone": false, 00:31:17.770 "esnap_clone": false 00:31:17.770 } 00:31:17.770 } 00:31:17.770 } 00:31:17.770 ] 00:31:17.770 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:17.770 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:17.770 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:18.028 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:18.028 21:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:18.028 21:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:18.286 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:18.286 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1c57f9a5-05a9-44e2-9be7-a547e5555711 00:31:18.287 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6076ddfc-6d16-4e44-9d0f-6efe193b87b8 00:31:18.544 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:18.803 00:31:18.803 real 0m17.508s 00:31:18.803 user 0m34.487s 00:31:18.803 sys 0m3.764s 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:18.803 ************************************ 00:31:18.803 END TEST lvs_grow_dirty 00:31:18.803 ************************************ 
00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:18.803 nvmf_trace.0 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.803 21:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.803 rmmod nvme_tcp 00:31:18.803 rmmod nvme_fabrics 00:31:18.803 rmmod nvme_keyring 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1509649 ']' 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1509649 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1509649 ']' 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1509649 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.803 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1509649 00:31:19.061 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:19.061 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:19.061 
21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1509649' 00:31:19.061 killing process with pid 1509649 00:31:19.061 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1509649 00:31:19.061 21:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1509649 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.061 21:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.595 
21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:21.595 00:31:21.595 real 0m43.080s 00:31:21.595 user 0m52.469s 00:31:21.595 sys 0m10.121s 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:21.595 ************************************ 00:31:21.595 END TEST nvmf_lvs_grow 00:31:21.595 ************************************ 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:21.595 ************************************ 00:31:21.595 START TEST nvmf_bdev_io_wait 00:31:21.595 ************************************ 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:21.595 * Looking for test storage... 
00:31:21.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:21.595 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:21.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.596 --rc genhtml_branch_coverage=1 00:31:21.596 --rc genhtml_function_coverage=1 00:31:21.596 --rc genhtml_legend=1 00:31:21.596 --rc geninfo_all_blocks=1 00:31:21.596 --rc geninfo_unexecuted_blocks=1 00:31:21.596 00:31:21.596 ' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:21.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.596 --rc genhtml_branch_coverage=1 00:31:21.596 --rc genhtml_function_coverage=1 00:31:21.596 --rc genhtml_legend=1 00:31:21.596 --rc geninfo_all_blocks=1 00:31:21.596 --rc geninfo_unexecuted_blocks=1 00:31:21.596 00:31:21.596 ' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:21.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.596 --rc genhtml_branch_coverage=1 00:31:21.596 --rc genhtml_function_coverage=1 00:31:21.596 --rc genhtml_legend=1 00:31:21.596 --rc geninfo_all_blocks=1 00:31:21.596 --rc geninfo_unexecuted_blocks=1 00:31:21.596 00:31:21.596 ' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:21.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.596 --rc genhtml_branch_coverage=1 00:31:21.596 --rc genhtml_function_coverage=1 
00:31:21.596 --rc genhtml_legend=1 00:31:21.596 --rc geninfo_all_blocks=1 00:31:21.596 --rc geninfo_unexecuted_blocks=1 00:31:21.596 00:31:21.596 ' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:21.596 21:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.596 21:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.596 21:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.596 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.597 21:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.597 21:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:28.169 21:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:28.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:28.169 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:28.170 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:28.170 Found net devices under 0000:86:00.0: cvl_0_0 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:28.170 Found net devices under 0000:86:00.1: cvl_0_1 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.170 21:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:28.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:31:28.170 00:31:28.170 --- 10.0.0.2 ping statistics --- 00:31:28.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.170 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:31:28.170 00:31:28.170 --- 10.0.0.1 ping statistics --- 00:31:28.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.170 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:28.170 21:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1513911 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1513911 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1513911 ']' 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.170 21:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.170 [2024-12-05 21:24:35.478441] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:28.170 [2024-12-05 21:24:35.479426] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:31:28.170 [2024-12-05 21:24:35.479466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.170 [2024-12-05 21:24:35.559377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:28.170 [2024-12-05 21:24:35.602506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.170 [2024-12-05 21:24:35.602541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.170 [2024-12-05 21:24:35.602548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.170 [2024-12-05 21:24:35.602554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.170 [2024-12-05 21:24:35.602559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:28.170 [2024-12-05 21:24:35.603966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.170 [2024-12-05 21:24:35.604061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.170 [2024-12-05 21:24:35.604166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.170 [2024-12-05 21:24:35.604167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.170 [2024-12-05 21:24:35.604530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 [2024-12-05 21:24:36.411845] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:28.447 [2024-12-05 21:24:36.412133] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:28.447 [2024-12-05 21:24:36.412431] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:28.447 [2024-12-05 21:24:36.412562] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 [2024-12-05 21:24:36.424952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 Malloc0 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.447 [2024-12-05 21:24:36.497226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1513976 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1513979 00:31:28.447 21:24:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.447 { 00:31:28.447 "params": { 00:31:28.447 "name": "Nvme$subsystem", 00:31:28.447 "trtype": "$TEST_TRANSPORT", 00:31:28.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.447 "adrfam": "ipv4", 00:31:28.447 "trsvcid": "$NVMF_PORT", 00:31:28.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.447 "hdgst": ${hdgst:-false}, 00:31:28.447 "ddgst": ${ddgst:-false} 00:31:28.447 }, 00:31:28.447 "method": "bdev_nvme_attach_controller" 00:31:28.447 } 00:31:28.447 EOF 00:31:28.447 )") 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:28.447 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1513982 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.448 21:24:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.448 { 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme$subsystem", 00:31:28.448 "trtype": "$TEST_TRANSPORT", 00:31:28.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "$NVMF_PORT", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.448 "hdgst": ${hdgst:-false}, 00:31:28.448 "ddgst": ${ddgst:-false} 00:31:28.448 }, 00:31:28.448 "method": "bdev_nvme_attach_controller" 00:31:28.448 } 00:31:28.448 EOF 00:31:28.448 )") 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1513985 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.448 { 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme$subsystem", 00:31:28.448 "trtype": "$TEST_TRANSPORT", 00:31:28.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "$NVMF_PORT", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.448 "hdgst": ${hdgst:-false}, 00:31:28.448 "ddgst": ${ddgst:-false} 00:31:28.448 }, 00:31:28.448 "method": "bdev_nvme_attach_controller" 00:31:28.448 } 00:31:28.448 EOF 00:31:28.448 )") 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.448 { 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme$subsystem", 00:31:28.448 "trtype": "$TEST_TRANSPORT", 00:31:28.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "$NVMF_PORT", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.448 "hdgst": ${hdgst:-false}, 00:31:28.448 "ddgst": ${ddgst:-false} 00:31:28.448 }, 00:31:28.448 "method": 
"bdev_nvme_attach_controller" 00:31:28.448 } 00:31:28.448 EOF 00:31:28.448 )") 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1513976 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme1", 00:31:28.448 "trtype": "tcp", 00:31:28.448 "traddr": "10.0.0.2", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "4420", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.448 "hdgst": false, 00:31:28.448 "ddgst": false 00:31:28.448 }, 00:31:28.448 "method": "bdev_nvme_attach_controller" 00:31:28.448 }' 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme1", 00:31:28.448 "trtype": "tcp", 00:31:28.448 "traddr": "10.0.0.2", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "4420", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.448 "hdgst": false, 00:31:28.448 "ddgst": false 00:31:28.448 }, 00:31:28.448 "method": "bdev_nvme_attach_controller" 00:31:28.448 }' 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme1", 00:31:28.448 "trtype": "tcp", 00:31:28.448 "traddr": "10.0.0.2", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "4420", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.448 "hdgst": false, 00:31:28.448 "ddgst": false 00:31:28.448 }, 00:31:28.448 "method": "bdev_nvme_attach_controller" 00:31:28.448 }' 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:28.448 21:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.448 "params": { 00:31:28.448 "name": "Nvme1", 00:31:28.448 "trtype": "tcp", 00:31:28.448 "traddr": "10.0.0.2", 00:31:28.448 "adrfam": "ipv4", 00:31:28.448 "trsvcid": "4420", 00:31:28.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.448 "hdgst": false, 00:31:28.448 "ddgst": false 00:31:28.448 }, 00:31:28.448 "method": "bdev_nvme_attach_controller" 
00:31:28.448 }' 00:31:28.448 [2024-12-05 21:24:36.550233] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:31:28.448 [2024-12-05 21:24:36.550290] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:28.448 [2024-12-05 21:24:36.550348] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:31:28.448 [2024-12-05 21:24:36.550347] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:31:28.448 [2024-12-05 21:24:36.550397] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 [2024-12-05 21:24:36.550398] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib--proc-type=auto ] 00:31:28.448 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:28.448 [2024-12-05 21:24:36.552582] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:31:28.448 [2024-12-05 21:24:36.552633] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:28.706 [2024-12-05 21:24:36.748397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.706 [2024-12-05 21:24:36.790772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:28.965 [2024-12-05 21:24:36.840864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.965 [2024-12-05 21:24:36.888014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:28.965 [2024-12-05 21:24:36.905617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.965 [2024-12-05 21:24:36.945302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.965 [2024-12-05 21:24:36.948117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:28.965 [2024-12-05 21:24:36.987605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:28.965 Running I/O for 1 seconds... 00:31:29.223 Running I/O for 1 seconds... 00:31:29.223 Running I/O for 1 seconds... 00:31:29.223 Running I/O for 1 seconds... 
00:31:30.157 9500.00 IOPS, 37.11 MiB/s 00:31:30.157 Latency(us) 00:31:30.157 [2024-12-05T20:24:38.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.157 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:30.157 Nvme1n1 : 1.02 9439.44 36.87 0.00 0.00 13409.83 3261.20 26838.55 00:31:30.157 [2024-12-05T20:24:38.265Z] =================================================================================================================== 00:31:30.157 [2024-12-05T20:24:38.265Z] Total : 9439.44 36.87 0.00 0.00 13409.83 3261.20 26838.55 00:31:30.157 242728.00 IOPS, 948.16 MiB/s [2024-12-05T20:24:38.265Z] 8176.00 IOPS, 31.94 MiB/s 00:31:30.157 Latency(us) 00:31:30.157 [2024-12-05T20:24:38.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.157 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:30.157 Nvme1n1 : 1.00 242352.36 946.69 0.00 0.00 525.16 226.26 1529.17 00:31:30.157 [2024-12-05T20:24:38.265Z] =================================================================================================================== 00:31:30.157 [2024-12-05T20:24:38.265Z] Total : 242352.36 946.69 0.00 0.00 525.16 226.26 1529.17 00:31:30.157 00:31:30.157 Latency(us) 00:31:30.157 [2024-12-05T20:24:38.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.157 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:30.157 Nvme1n1 : 1.01 8277.00 32.33 0.00 0.00 15419.58 4462.69 25715.08 00:31:30.157 [2024-12-05T20:24:38.265Z] =================================================================================================================== 00:31:30.157 [2024-12-05T20:24:38.265Z] Total : 8277.00 32.33 0.00 0.00 15419.58 4462.69 25715.08 00:31:30.157 11882.00 IOPS, 46.41 MiB/s 00:31:30.157 Latency(us) 00:31:30.157 [2024-12-05T20:24:38.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.157 
Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:30.157 Nvme1n1 : 1.01 11948.36 46.67 0.00 0.00 10682.22 4056.99 14979.66 00:31:30.157 [2024-12-05T20:24:38.265Z] =================================================================================================================== 00:31:30.157 [2024-12-05T20:24:38.265Z] Total : 11948.36 46.67 0.00 0.00 10682.22 4056.99 14979.66 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1513979 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1513982 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1513985 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.416 
21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.416 rmmod nvme_tcp 00:31:30.416 rmmod nvme_fabrics 00:31:30.416 rmmod nvme_keyring 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:30.416 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1513911 ']' 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1513911 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1513911 ']' 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1513911 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1513911 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1513911' 00:31:30.417 killing process with pid 1513911 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1513911 00:31:30.417 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1513911 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.676 21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.676 
21:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.213 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.213 00:31:33.213 real 0m11.435s 00:31:33.213 user 0m15.277s 00:31:33.213 sys 0m6.436s 00:31:33.213 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.213 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:33.213 ************************************ 00:31:33.213 END TEST nvmf_bdev_io_wait 00:31:33.213 ************************************ 00:31:33.213 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.214 ************************************ 00:31:33.214 START TEST nvmf_queue_depth 00:31:33.214 ************************************ 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:33.214 * Looking for test storage... 
00:31:33.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:33.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.214 --rc genhtml_branch_coverage=1 00:31:33.214 --rc genhtml_function_coverage=1 00:31:33.214 --rc genhtml_legend=1 00:31:33.214 --rc geninfo_all_blocks=1 00:31:33.214 --rc geninfo_unexecuted_blocks=1 00:31:33.214 00:31:33.214 ' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:33.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.214 --rc genhtml_branch_coverage=1 00:31:33.214 --rc genhtml_function_coverage=1 00:31:33.214 --rc genhtml_legend=1 00:31:33.214 --rc geninfo_all_blocks=1 00:31:33.214 --rc geninfo_unexecuted_blocks=1 00:31:33.214 00:31:33.214 ' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:33.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.214 --rc genhtml_branch_coverage=1 00:31:33.214 --rc genhtml_function_coverage=1 00:31:33.214 --rc genhtml_legend=1 00:31:33.214 --rc geninfo_all_blocks=1 00:31:33.214 --rc geninfo_unexecuted_blocks=1 00:31:33.214 00:31:33.214 ' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:33.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.214 --rc genhtml_branch_coverage=1 00:31:33.214 --rc genhtml_function_coverage=1 00:31:33.214 --rc genhtml_legend=1 00:31:33.214 --rc 
geninfo_all_blocks=1 00:31:33.214 --rc geninfo_unexecuted_blocks=1 00:31:33.214 00:31:33.214 ' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.214 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.215 21:24:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.215 21:24:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.215 21:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.215 21:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.215 21:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.490 
21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:38.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.490 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:38.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:38.490 Found net devices under 0000:86:00.0: cvl_0_0 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:38.490 Found net devices under 0000:86:00.1: cvl_0_1 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:38.490 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:38.490 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:38.491 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:38.491 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.491 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:38.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:31:38.750 00:31:38.750 --- 10.0.0.2 ping statistics --- 00:31:38.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.750 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:38.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:31:38.750 00:31:38.750 --- 10.0.0.1 ping statistics --- 00:31:38.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.750 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.750 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.751 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1517806 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1517806 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1517806 ']' 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.751 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.010 [2024-12-05 21:24:46.894267] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:39.010 [2024-12-05 21:24:46.895251] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:31:39.010 [2024-12-05 21:24:46.895289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.010 [2024-12-05 21:24:46.975011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.010 [2024-12-05 21:24:47.016392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.010 [2024-12-05 21:24:47.016429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.010 [2024-12-05 21:24:47.016437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.010 [2024-12-05 21:24:47.016443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.010 [2024-12-05 21:24:47.016449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.010 [2024-12-05 21:24:47.016998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.010 [2024-12-05 21:24:47.084286] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:39.010 [2024-12-05 21:24:47.084511] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:39.010 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.010 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:39.010 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.010 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.010 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.269 [2024-12-05 21:24:47.157757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.269 Malloc0 00:31:39.269 21:24:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.269 [2024-12-05 21:24:47.233701] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.269 
21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1517967 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:39.269 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1517967 /var/tmp/bdevperf.sock 00:31:39.270 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1517967 ']' 00:31:39.270 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:39.270 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.270 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:39.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:39.270 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.270 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.270 [2024-12-05 21:24:47.284780] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:31:39.270 [2024-12-05 21:24:47.284823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517967 ] 00:31:39.270 [2024-12-05 21:24:47.357540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.528 [2024-12-05 21:24:47.398996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:39.528 NVMe0n1 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.528 21:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:39.787 Running I/O for 10 seconds... 
00:31:41.661 12214.00 IOPS, 47.71 MiB/s [2024-12-05T20:24:51.147Z] 12288.50 IOPS, 48.00 MiB/s [2024-12-05T20:24:51.715Z] 12308.67 IOPS, 48.08 MiB/s [2024-12-05T20:24:53.095Z] 12450.25 IOPS, 48.63 MiB/s [2024-12-05T20:24:54.032Z] 12471.00 IOPS, 48.71 MiB/s [2024-12-05T20:24:54.969Z] 12488.83 IOPS, 48.78 MiB/s [2024-12-05T20:24:55.905Z] 12546.14 IOPS, 49.01 MiB/s [2024-12-05T20:24:56.839Z] 12552.00 IOPS, 49.03 MiB/s [2024-12-05T20:24:57.775Z] 12561.44 IOPS, 49.07 MiB/s [2024-12-05T20:24:58.035Z] 12589.90 IOPS, 49.18 MiB/s 00:31:49.927 Latency(us) 00:31:49.927 [2024-12-05T20:24:58.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.927 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:49.927 Verification LBA range: start 0x0 length 0x4000 00:31:49.927 NVMe0n1 : 10.06 12617.85 49.29 0.00 0.00 80891.16 18724.57 51679.82 00:31:49.927 [2024-12-05T20:24:58.035Z] =================================================================================================================== 00:31:49.927 [2024-12-05T20:24:58.035Z] Total : 12617.85 49.29 0.00 0.00 80891.16 18724.57 51679.82 00:31:49.927 { 00:31:49.927 "results": [ 00:31:49.927 { 00:31:49.927 "job": "NVMe0n1", 00:31:49.927 "core_mask": "0x1", 00:31:49.927 "workload": "verify", 00:31:49.927 "status": "finished", 00:31:49.927 "verify_range": { 00:31:49.927 "start": 0, 00:31:49.927 "length": 16384 00:31:49.927 }, 00:31:49.927 "queue_depth": 1024, 00:31:49.927 "io_size": 4096, 00:31:49.927 "runtime": 10.059006, 00:31:49.927 "iops": 12617.847131217537, 00:31:49.927 "mibps": 49.288465356318504, 00:31:49.927 "io_failed": 0, 00:31:49.927 "io_timeout": 0, 00:31:49.927 "avg_latency_us": 80891.15716446005, 00:31:49.927 "min_latency_us": 18724.571428571428, 00:31:49.927 "max_latency_us": 51679.817142857144 00:31:49.927 } 00:31:49.927 ], 00:31:49.927 "core_count": 1 00:31:49.927 } 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1517967 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1517967 ']' 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1517967 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517967 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:49.927 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:49.928 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517967' 00:31:49.928 killing process with pid 1517967 00:31:49.928 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1517967 00:31:49.928 Received shutdown signal, test time was about 10.000000 seconds 00:31:49.928 00:31:49.928 Latency(us) 00:31:49.928 [2024-12-05T20:24:58.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.928 [2024-12-05T20:24:58.036Z] =================================================================================================================== 00:31:49.928 [2024-12-05T20:24:58.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:49.928 21:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1517967 00:31:49.928 21:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.928 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:49.928 rmmod nvme_tcp 00:31:50.187 rmmod nvme_fabrics 00:31:50.187 rmmod nvme_keyring 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1517806 ']' 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1517806 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1517806 ']' 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1517806 00:31:50.187 21:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517806 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517806' 00:31:50.187 killing process with pid 1517806 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1517806 00:31:50.187 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1517806 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.446 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.352 00:31:52.352 real 0m19.599s 00:31:52.352 user 0m22.646s 00:31:52.352 sys 0m6.262s 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:52.352 ************************************ 00:31:52.352 END TEST nvmf_queue_depth 00:31:52.352 ************************************ 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.352 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.611 ************************************ 00:31:52.611 START 
TEST nvmf_target_multipath 00:31:52.611 ************************************ 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:52.611 * Looking for test storage... 00:31:52.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.611 21:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.611 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.612 --rc genhtml_branch_coverage=1 00:31:52.612 --rc genhtml_function_coverage=1 00:31:52.612 --rc genhtml_legend=1 00:31:52.612 --rc geninfo_all_blocks=1 00:31:52.612 --rc geninfo_unexecuted_blocks=1 00:31:52.612 00:31:52.612 ' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.612 --rc genhtml_branch_coverage=1 00:31:52.612 --rc genhtml_function_coverage=1 00:31:52.612 --rc genhtml_legend=1 00:31:52.612 --rc geninfo_all_blocks=1 00:31:52.612 --rc geninfo_unexecuted_blocks=1 00:31:52.612 00:31:52.612 ' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.612 --rc genhtml_branch_coverage=1 00:31:52.612 --rc genhtml_function_coverage=1 00:31:52.612 --rc genhtml_legend=1 00:31:52.612 --rc geninfo_all_blocks=1 00:31:52.612 --rc geninfo_unexecuted_blocks=1 00:31:52.612 00:31:52.612 ' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.612 --rc genhtml_branch_coverage=1 00:31:52.612 --rc genhtml_function_coverage=1 00:31:52.612 --rc genhtml_legend=1 00:31:52.612 --rc geninfo_all_blocks=1 00:31:52.612 --rc geninfo_unexecuted_blocks=1 00:31:52.612 00:31:52.612 ' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.612 21:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.612 21:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.612 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.184 21:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.184 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.184 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.184 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.184 21:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.184 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.184 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.185 21:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.185 21:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:31:59.185 00:31:59.185 --- 10.0.0.2 ping statistics --- 00:31:59.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.185 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:31:59.185 00:31:59.185 --- 10.0.0.1 ping statistics --- 00:31:59.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.185 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:59.185 only one NIC for nvmf test 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:59.185 21:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.185 rmmod nvme_tcp 00:31:59.185 rmmod nvme_fabrics 00:31:59.185 rmmod nvme_keyring 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:59.185 21:25:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.185 21:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.087 
21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.087 00:32:01.087 real 0m8.272s 00:32:01.087 user 0m1.847s 00:32:01.087 sys 0m4.443s 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:01.087 ************************************ 00:32:01.087 END TEST nvmf_target_multipath 00:32:01.087 ************************************ 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:01.087 ************************************ 00:32:01.087 START TEST nvmf_zcopy 00:32:01.087 ************************************ 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:01.087 * Looking for test storage... 
00:32:01.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:01.087 21:25:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:01.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.087 --rc genhtml_branch_coverage=1 00:32:01.087 --rc genhtml_function_coverage=1 00:32:01.087 --rc genhtml_legend=1 00:32:01.087 --rc geninfo_all_blocks=1 00:32:01.087 --rc geninfo_unexecuted_blocks=1 00:32:01.087 00:32:01.087 ' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:01.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.087 --rc genhtml_branch_coverage=1 00:32:01.087 --rc genhtml_function_coverage=1 00:32:01.087 --rc genhtml_legend=1 00:32:01.087 --rc geninfo_all_blocks=1 00:32:01.087 --rc geninfo_unexecuted_blocks=1 00:32:01.087 00:32:01.087 ' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:01.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.087 --rc genhtml_branch_coverage=1 00:32:01.087 --rc genhtml_function_coverage=1 00:32:01.087 --rc genhtml_legend=1 00:32:01.087 --rc geninfo_all_blocks=1 00:32:01.087 --rc geninfo_unexecuted_blocks=1 00:32:01.087 00:32:01.087 ' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:01.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.087 --rc genhtml_branch_coverage=1 00:32:01.087 --rc genhtml_function_coverage=1 00:32:01.087 --rc genhtml_legend=1 00:32:01.087 --rc geninfo_all_blocks=1 00:32:01.087 --rc geninfo_unexecuted_blocks=1 00:32:01.087 00:32:01.087 ' 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.087 21:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.087 21:25:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.087 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.088 21:25:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.088 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.653 
21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.653 21:25:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:07.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:07.653 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.653 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:07.654 Found net devices under 0000:86:00.0: cvl_0_0 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:07.654 Found net devices under 0000:86:00.1: cvl_0_1 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.654 21:25:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:07.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:07.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms
00:32:07.654
00:32:07.654 --- 10.0.0.2 ping statistics ---
00:32:07.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:07.654 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:07.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:07.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:32:07.654
00:32:07.654 --- 10.0.0.1 ping statistics ---
00:32:07.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:07.654 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- #
nvmfpid=1526615
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1526615
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1526615 ']'
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:07.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:07.654 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:07.654 [2024-12-05 21:25:14.974255] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:07.654 [2024-12-05 21:25:14.975165] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization...
00:32:07.654 [2024-12-05 21:25:14.975200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:07.654 [2024-12-05 21:25:15.053898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:07.654 [2024-12-05 21:25:15.093949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:07.654 [2024-12-05 21:25:15.093981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:07.654 [2024-12-05 21:25:15.093988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:07.654 [2024-12-05 21:25:15.093994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:07.654 [2024-12-05 21:25:15.093999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:07.654 [2024-12-05 21:25:15.094519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:07.654 [2024-12-05 21:25:15.160727] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:07.654 [2024-12-05 21:25:15.160947] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
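The namespace plumbing traced earlier (the `nvmf_tcp_init` steps at nvmf/common.sh@250–284) can be summarized with the sketch below. This is a hedged reconstruction, not the real nvmf/common.sh code: interface names and addresses are taken from this log, and `run_cmd`/`nvmf_tcp_init_sketch` are illustrative helpers that only echo the commands so the sketch can run without root.

```shell
#!/usr/bin/env bash
# Sketch of the test-bed setup seen in the trace: the target port (cvl_0_0)
# is moved into its own network namespace so the target (10.0.0.2, inside
# the namespace) and the initiator (10.0.0.1, in the host) talk over the
# two physical ports. run_cmd only echoes; replace its body with "$@" to
# actually execute (requires root).
run_cmd() { echo "+ $*"; }

nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2 ns="${1}_ns_spdk"
    run_cmd ip -4 addr flush "$target_if"
    run_cmd ip -4 addr flush "$initiator_if"
    run_cmd ip netns add "$ns"
    run_cmd ip link set "$target_if" netns "$ns"
    run_cmd ip addr add 10.0.0.1/24 dev "$initiator_if"
    run_cmd ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run_cmd ip link set "$initiator_if" up
    run_cmd ip netns exec "$ns" ip link set "$target_if" up
    run_cmd ip netns exec "$ns" ip link set lo up
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

The cross-namespace pings in the trace (host to 10.0.0.2, namespace to 10.0.0.1) then verify that this split actually carries traffic before the target is started inside the namespace with `ip netns exec`.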
00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.654 [2024-12-05 21:25:15.227200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.654 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.655 
21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.655 [2024-12-05 21:25:15.251365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.655 malloc0 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.655 { 00:32:07.655 "params": { 00:32:07.655 "name": "Nvme$subsystem", 00:32:07.655 "trtype": "$TEST_TRANSPORT", 00:32:07.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.655 "adrfam": "ipv4", 00:32:07.655 "trsvcid": "$NVMF_PORT", 00:32:07.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.655 "hdgst": ${hdgst:-false}, 00:32:07.655 "ddgst": ${ddgst:-false} 00:32:07.655 }, 00:32:07.655 "method": "bdev_nvme_attach_controller" 00:32:07.655 } 00:32:07.655 EOF 00:32:07.655 )") 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:07.655 21:25:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:07.655 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:07.655 "params": { 00:32:07.655 "name": "Nvme1", 00:32:07.655 "trtype": "tcp", 00:32:07.655 "traddr": "10.0.0.2", 00:32:07.655 "adrfam": "ipv4", 00:32:07.655 "trsvcid": "4420", 00:32:07.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.655 "hdgst": false, 00:32:07.655 "ddgst": false 00:32:07.655 }, 00:32:07.655 "method": "bdev_nvme_attach_controller" 00:32:07.655 }' 00:32:07.655 [2024-12-05 21:25:15.344006] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:32:07.655 [2024-12-05 21:25:15.344047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526640 ] 00:32:07.655 [2024-12-05 21:25:15.419048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.655 [2024-12-05 21:25:15.459088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.655 Running I/O for 10 seconds... 
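The `gen_nvmf_target_json` expansion visible above (a heredoc template expanded per subsystem, then normalized with `jq` and fed to bdevperf over `--json /dev/fd/62`) can be sketched as follows. This is an illustrative reconstruction, not the real nvmf/common.sh helper: `gen_target_json_sketch` and its parameters are hypothetical names, and the values mirror the expanded JSON printed in this log.

```shell
#!/usr/bin/env bash
# Emit one bdev_nvme_attach_controller stanza per subsystem, matching the
# expanded output seen in the trace (Nvme1 -> cnode1/host1 at 10.0.0.2:4420).
gen_target_json_sketch() {
    local n=$1 traddr=$2 trsvcid=$3
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json_sketch 1 10.0.0.2 4420
```

Passing the result via a process-substitution file descriptor (`/dev/fd/62`) lets bdevperf consume the generated config without writing a temporary file.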
00:32:09.969 8612.00 IOPS, 67.28 MiB/s
[2024-12-05T20:25:19.014Z] 8631.50 IOPS, 67.43 MiB/s
[2024-12-05T20:25:19.952Z] 8655.00 IOPS, 67.62 MiB/s
[2024-12-05T20:25:20.889Z] 8659.25 IOPS, 67.65 MiB/s
[2024-12-05T20:25:21.825Z] 8649.40 IOPS, 67.57 MiB/s
[2024-12-05T20:25:22.762Z] 8663.83 IOPS, 67.69 MiB/s
[2024-12-05T20:25:23.700Z] 8672.29 IOPS, 67.75 MiB/s
[2024-12-05T20:25:25.076Z] 8675.25 IOPS, 67.78 MiB/s
[2024-12-05T20:25:26.009Z] 8683.22 IOPS, 67.84 MiB/s
[2024-12-05T20:25:26.009Z] 8682.00 IOPS, 67.83 MiB/s
00:32:17.901 Latency(us)
00:32:17.901 [2024-12-05T20:25:26.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:17.901 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:32:17.901 Verification LBA range: start 0x0 length 0x1000
00:32:17.901 Nvme1n1 : 10.01 8684.90 67.85 0.00 0.00 14696.68 2387.38 20846.69
00:32:17.901 [2024-12-05T20:25:26.009Z] ===================================================================================================================
00:32:17.901 [2024-12-05T20:25:26.009Z] Total : 8684.90 67.85 0.00 0.00 14696.68 2387.38 20846.69
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1528237
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:32:17.901 21:25:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:17.901 { 00:32:17.901 "params": { 00:32:17.901 "name": "Nvme$subsystem", 00:32:17.901 "trtype": "$TEST_TRANSPORT", 00:32:17.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.901 "adrfam": "ipv4", 00:32:17.901 "trsvcid": "$NVMF_PORT", 00:32:17.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.901 "hdgst": ${hdgst:-false}, 00:32:17.901 "ddgst": ${ddgst:-false} 00:32:17.901 }, 00:32:17.901 "method": "bdev_nvme_attach_controller" 00:32:17.901 } 00:32:17.901 EOF 00:32:17.901 )") 00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:17.901 [2024-12-05 21:25:25.854886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.854916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:17.901 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:17.901 "params": { 00:32:17.901 "name": "Nvme1", 00:32:17.901 "trtype": "tcp", 00:32:17.901 "traddr": "10.0.0.2", 00:32:17.901 "adrfam": "ipv4", 00:32:17.901 "trsvcid": "4420", 00:32:17.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:17.901 "hdgst": false, 00:32:17.901 "ddgst": false 00:32:17.901 }, 00:32:17.901 "method": "bdev_nvme_attach_controller" 00:32:17.901 }' 00:32:17.901 [2024-12-05 21:25:25.866846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.866858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.878845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.878862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.890846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.890855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.892799] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:32:17.901 [2024-12-05 21:25:25.892839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528237 ] 00:32:17.901 [2024-12-05 21:25:25.902844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.902853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.914845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.914855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.926846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.926855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.938843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.938852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.950842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.950851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.962841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.962849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.964188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.901 [2024-12-05 21:25:25.974846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:17.901 [2024-12-05 21:25:25.974858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.986844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.986855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:25.998843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.901 [2024-12-05 21:25:25.998852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.901 [2024-12-05 21:25:26.005714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.160 [2024-12-05 21:25:26.010845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.160 [2024-12-05 21:25:26.010857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.160 [2024-12-05 21:25:26.022855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.160 [2024-12-05 21:25:26.022873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.160 [2024-12-05 21:25:26.034850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.160 [2024-12-05 21:25:26.034866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.160 [2024-12-05 21:25:26.046845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.160 [2024-12-05 21:25:26.046856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.160 [2024-12-05 21:25:26.058844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.160 [2024-12-05 21:25:26.058855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.160 [2024-12-05 21:25:26.070847] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.070858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.082842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.082853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.094855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.094874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.106858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.106873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.118856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.118874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.130847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.130861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.142844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.142853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.154842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.154852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.166841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.166851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.178845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.178862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.190841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.190852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.202841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.202851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.214846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.214860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.226842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.226851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.238842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.238851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.250841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.250850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.160 [2024-12-05 21:25:26.262844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.160 [2024-12-05 21:25:26.262855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.274848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.274866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 Running I/O for 5 seconds...
00:32:18.419 [2024-12-05 21:25:26.288325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.288344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.303276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.303298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.318782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.318800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.333165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.333184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.347706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.347724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.358756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.358773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.373065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.373084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.388139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.388157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.402733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.402752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.416468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.416494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.431067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.431086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.442317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.442336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.456285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.456302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.470874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.470892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.483558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.483576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.496588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.496606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.511060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.511078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.419 [2024-12-05 21:25:26.523096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.419 [2024-12-05 21:25:26.523114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.536761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.536779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.551138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.551155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.567137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.567159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.580508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.580526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.594909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.594928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.607779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.607796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.622722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.622743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.635789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.635807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.650352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.650376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.664665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.664683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.679151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.679168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.694724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.677 [2024-12-05 21:25:26.694742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.677 [2024-12-05 21:25:26.708468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.678 [2024-12-05 21:25:26.708486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.678 [2024-12-05 21:25:26.722813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.678 [2024-12-05 21:25:26.722831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.678 [2024-12-05 21:25:26.736339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.678 [2024-12-05 21:25:26.736358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.678 [2024-12-05 21:25:26.750635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.678 [2024-12-05 21:25:26.750653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.678 [2024-12-05 21:25:26.763810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.678 [2024-12-05 21:25:26.763828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.678 [2024-12-05 21:25:26.778924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.678 [2024-12-05 21:25:26.778942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.791825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.791842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.807050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.807068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.819776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.819793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.835070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.835092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.846112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.846130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.860806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.860825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.875465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.875483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.890964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.890982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.904434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.904452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.919056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.919074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.933185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.933203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.947525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.947542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.963094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.963112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.976375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.976392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:26.991713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:26.991730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:27.006643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:27.006661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:27.020654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:27.020672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.936 [2024-12-05 21:25:27.034928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.936 [2024-12-05 21:25:27.034946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.047557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.047574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.062747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.062771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.075220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.075238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.088435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.088453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.103155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.103172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.118294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.118313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.132463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.132482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.146943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.146962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.159904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.159923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.175158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.175175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.191132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.191150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.204261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.204280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.215390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.215408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.228659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.228678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.243109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.243128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.255623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.255640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.270177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.270195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 [2024-12-05 21:25:27.284871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.284889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.195 16981.00 IOPS, 132.66 MiB/s
[2024-12-05T20:25:27.303Z] [2024-12-05 21:25:27.299463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.195 [2024-12-05 21:25:27.299481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.314557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.314574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.327971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.327989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.342283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.342301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.356226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.356248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.370680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.370698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.384197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.384216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.399062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.399080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.412968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.412986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.427185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.427202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.442491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.442509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.455359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.455383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.470258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.470276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.484439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.484457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.499132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.499149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.514217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.514235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.528605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.528624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.543497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.543515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.453 [2024-12-05 21:25:27.558455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.453 [2024-12-05 21:25:27.558473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.572555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.572573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.587304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.587323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.598103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.598121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.612740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.612759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.627563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.627583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.642215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.642235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.656210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.656231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.671038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.671056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.682705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.682723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.696992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.697011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.711560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.711578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.723592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.723611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.738827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.738845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.751694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.751711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.766883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.766902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.779487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.779505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.794532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.794551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.711 [2024-12-05 21:25:27.809310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.711 [2024-12-05 21:25:27.809328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.823533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.823551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.838255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.838273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.852278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.852296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.867140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.867158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.882325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.882344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.896178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.896201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.910867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.910885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.924118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.924136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.935223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.935240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.948545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.948564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.963729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.963748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.979301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.979319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:27.994584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:27.994603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:28.007523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:28.007542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:28.022851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:28.022869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:28.034772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.970 [2024-12-05 21:25:28.034791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.970 [2024-12-05 21:25:28.048624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.971 [2024-12-05 21:25:28.048643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.971 [2024-12-05 21:25:28.063621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.971 [2024-12-05 21:25:28.063639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.078358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.078385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.091735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.091753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.106657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.106675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.117940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.117958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.133093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.133110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.147597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.147615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.163386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.163409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.178047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.178065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.192519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.230 [2024-12-05 21:25:28.192536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.230 [2024-12-05 21:25:28.206895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.206915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.218086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.218106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.232857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.232877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.247514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.247533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.259642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.259660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.271791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.271809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.286541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.286559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 16933.50 IOPS, 132.29 MiB/s
[2024-12-05T20:25:28.339Z] [2024-12-05 21:25:28.300213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.300231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.314720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 [2024-12-05 21:25:28.314738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.231 [2024-12-05 21:25:28.327904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.231 
[2024-12-05 21:25:28.327921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.342817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.342835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.354703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.354721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.368832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.368849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.383534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.383553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.399413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.399431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.414619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.414638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.428671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.428693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.442895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.442913] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.453827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.453845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.468304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.468321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.483157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.483174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.498169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.498188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.511880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.490 [2024-12-05 21:25:28.511897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.490 [2024-12-05 21:25:28.526567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.491 [2024-12-05 21:25:28.526584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.491 [2024-12-05 21:25:28.540463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.491 [2024-12-05 21:25:28.540481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.491 [2024-12-05 21:25:28.554840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.491 [2024-12-05 21:25:28.554858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:20.491 [2024-12-05 21:25:28.567573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.491 [2024-12-05 21:25:28.567591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.491 [2024-12-05 21:25:28.582362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.491 [2024-12-05 21:25:28.582389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.491 [2024-12-05 21:25:28.596645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.491 [2024-12-05 21:25:28.596663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.611175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.611193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.626324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.626342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.640652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.640670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.654760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.654778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.667864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.667882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.682708] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.682728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.696252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.696270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.710818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.710836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.723573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.723590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.738755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.738773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.753151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.753173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.767718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.767735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.783050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.783068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.796308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.796326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.810998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.811019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.823725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.823742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.838201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.838218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.750 [2024-12-05 21:25:28.852793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.750 [2024-12-05 21:25:28.852811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.867288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.867305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.882996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.883014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.896383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.896401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.911314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 
[2024-12-05 21:25:28.911332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.927001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.927019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.940872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.940889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.955329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.955346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.970818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.970836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.984246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.984264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:28.998932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:28.998950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.012294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.012312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.026702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.026720] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.040409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.040428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.054793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.054811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.068105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.068124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.082495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.082515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.093707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.093726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.010 [2024-12-05 21:25:29.108514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.010 [2024-12-05 21:25:29.108534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.123285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.123304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.138569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.138587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:21.269 [2024-12-05 21:25:29.152779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.152798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.167312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.167329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.182627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.182647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.196646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.196664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.211339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.211357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.223570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.223589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.239026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.239045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.251791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.251809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.264541] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.264560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.279500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.279519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 16957.33 IOPS, 132.48 MiB/s [2024-12-05T20:25:29.377Z] [2024-12-05 21:25:29.294853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.294875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.308381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.308400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.322832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.322851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.335625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.335645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.348136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.348154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.359441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.269 [2024-12-05 21:25:29.359459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.269 [2024-12-05 21:25:29.375043] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.270 [2024-12-05 21:25:29.375061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.387248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.387266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.401348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.401366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.415829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.415846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.430271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.430290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.445052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.445070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.459559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.459577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.474115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.474133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.487538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.487565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.502894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.502913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.515588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.515607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.531248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.531266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.546429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.546447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.560697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.560715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.575179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.575196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.590918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.590936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.604425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 
[2024-12-05 21:25:29.604446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.618543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.618561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.529 [2024-12-05 21:25:29.631071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.529 [2024-12-05 21:25:29.631088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.788 [2024-12-05 21:25:29.644481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.788 [2024-12-05 21:25:29.644499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.788 [2024-12-05 21:25:29.659305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.788 [2024-12-05 21:25:29.659322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.788 [2024-12-05 21:25:29.674904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.788 [2024-12-05 21:25:29.674923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.788 [2024-12-05 21:25:29.688576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.788 [2024-12-05 21:25:29.688594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.788 [2024-12-05 21:25:29.703635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.788 [2024-12-05 21:25:29.703653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.718496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.718515] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.732352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.732376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.747749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.747767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.763454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.763476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.778041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.778060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.792684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.792703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.807170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.807188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.823721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.823739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.838833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.838851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:21.789 [2024-12-05 21:25:29.851543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.851561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.867007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.867025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.789 [2024-12-05 21:25:29.880707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.789 [2024-12-05 21:25:29.880725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.895463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.895482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.909928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.909946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.922663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.922682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.936965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.936983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.951386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.951403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.966940] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.966959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.977827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.977845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:29.992193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:29.992211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.007742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.007761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.022995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.023014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.034598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.034622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.048991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.049010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.063940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.063959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.080244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.080264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.095179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.095196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.110833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.110851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.123998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.124015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.048 [2024-12-05 21:25:30.139129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.048 [2024-12-05 21:25:30.139146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.154709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.154728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.168506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.168524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.183345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.183362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.198672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 
[2024-12-05 21:25:30.198690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.211586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.211604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.226683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.226702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.238300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.238318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.252675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.252694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.267642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.267660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 [2024-12-05 21:25:30.282423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.282442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.307 16895.25 IOPS, 131.99 MiB/s [2024-12-05T20:25:30.415Z] [2024-12-05 21:25:30.295342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.307 [2024-12-05 21:25:30.295359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.310964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 
[2024-12-05 21:25:30.310982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.324669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.324688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.339104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.339122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.350277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.350295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.364977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.364995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.379543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.379560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.394128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.394145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.308 [2024-12-05 21:25:30.407743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.308 [2024-12-05 21:25:30.407760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.567 [2024-12-05 21:25:30.423392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.423409] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.439035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.439053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.451565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.451584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.466876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.466893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.480971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.480990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.495743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.495761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.511269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.511286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.526713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.526731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.540890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.540910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:22.568 [2024-12-05 21:25:30.555012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.555030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.567687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.567707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.580441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.580461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.595014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.595033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.605518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.605537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.620014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.620031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.634927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.634945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.646213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.646231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.568 [2024-12-05 21:25:30.660120] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.568 [2024-12-05 21:25:30.660138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.674981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.674999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.685517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.685535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.700483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.700501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.715304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.715321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.731250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.731267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.747292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.747310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.762749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.762767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.773974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.773992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.788507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.788526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.802937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.802960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.816391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.816409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.831221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.831239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.846670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.846688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.860217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.860235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.874522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.874540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.887127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 
[2024-12-05 21:25:30.887145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.828 [2024-12-05 21:25:30.900483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.828 [2024-12-05 21:25:30.900501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.829 [2024-12-05 21:25:30.915052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.829 [2024-12-05 21:25:30.915070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.829 [2024-12-05 21:25:30.927936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.829 [2024-12-05 21:25:30.927954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:30.942859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:30.942878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:30.956006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:30.956024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:30.966919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:30.966938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:30.980937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:30.980954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:30.995381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:30.995399] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.010531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.010550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.024789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.024807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.039579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.039596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.054928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.054946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.067439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.067456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.080717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.080735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.095526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.095548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.111040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.111058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:23.088 [2024-12-05 21:25:31.124477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.124495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.139173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.139189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.155258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.155276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.171167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.088 [2024-12-05 21:25:31.171184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.088 [2024-12-05 21:25:31.186484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.089 [2024-12-05 21:25:31.186502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.200870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.200888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.215506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.215523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.231092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.231110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.244415] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.244433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.258977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.258994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.272150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.272168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.286734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.286752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 16890.80 IOPS, 131.96 MiB/s [2024-12-05T20:25:31.456Z] [2024-12-05 21:25:31.299893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.299912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 00:32:23.348 Latency(us) 00:32:23.348 [2024-12-05T20:25:31.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.348 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:23.348 Nvme1n1 : 5.01 16890.79 131.96 0.00 0.00 7570.54 1997.29 13294.45 00:32:23.348 [2024-12-05T20:25:31.456Z] =================================================================================================================== 00:32:23.348 [2024-12-05T20:25:31.456Z] Total : 16890.79 131.96 0.00 0.00 7570.54 1997.29 13294.45 00:32:23.348 [2024-12-05 21:25:31.310847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.310865] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.322848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.322866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.334861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.334879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.346852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.346865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.358850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.358863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.348 [2024-12-05 21:25:31.370845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.348 [2024-12-05 21:25:31.370859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.349 [2024-12-05 21:25:31.382845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.382857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.349 [2024-12-05 21:25:31.394846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.394859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.349 [2024-12-05 21:25:31.406847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.406859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:23.349 [2024-12-05 21:25:31.418843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.418852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.349 [2024-12-05 21:25:31.430847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.430857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.349 [2024-12-05 21:25:31.442846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.442856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.349 [2024-12-05 21:25:31.454845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.349 [2024-12-05 21:25:31.454855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1528237) - No such process 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1528237 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:23.608 21:25:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:23.608 delay0 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.608 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:23.608 [2024-12-05 21:25:31.602953] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:30.180 Initializing NVMe Controllers 00:32:30.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:30.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:30.180 Initialization complete. Launching workers. 
00:32:30.180 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 293, failed: 10647 00:32:30.180 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10878, failed to submit 62 00:32:30.180 success 10776, unsuccessful 102, failed 0 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.180 rmmod nvme_tcp 00:32:30.180 rmmod nvme_fabrics 00:32:30.180 rmmod nvme_keyring 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1526615 ']' 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1526615 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 1526615 ']' 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1526615 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.180 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1526615 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1526615' 00:32:30.439 killing process with pid 1526615 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1526615 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1526615 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.439 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.975 00:32:32.975 real 0m31.751s 00:32:32.975 user 0m41.090s 00:32:32.975 sys 0m12.607s 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:32.975 ************************************ 00:32:32.975 END TEST nvmf_zcopy 00:32:32.975 ************************************ 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:32.975 
************************************ 00:32:32.975 START TEST nvmf_nmic 00:32:32.975 ************************************ 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:32.975 * Looking for test storage... 00:32:32.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.975 21:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:32.975 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.976 21:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.976 --rc genhtml_branch_coverage=1 00:32:32.976 --rc genhtml_function_coverage=1 00:32:32.976 --rc genhtml_legend=1 00:32:32.976 --rc geninfo_all_blocks=1 00:32:32.976 --rc geninfo_unexecuted_blocks=1 00:32:32.976 00:32:32.976 ' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.976 --rc genhtml_branch_coverage=1 00:32:32.976 --rc genhtml_function_coverage=1 00:32:32.976 --rc genhtml_legend=1 00:32:32.976 --rc geninfo_all_blocks=1 00:32:32.976 --rc geninfo_unexecuted_blocks=1 00:32:32.976 00:32:32.976 ' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.976 --rc genhtml_branch_coverage=1 00:32:32.976 --rc genhtml_function_coverage=1 00:32:32.976 --rc genhtml_legend=1 00:32:32.976 --rc geninfo_all_blocks=1 00:32:32.976 --rc geninfo_unexecuted_blocks=1 00:32:32.976 00:32:32.976 ' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.976 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.976 --rc genhtml_branch_coverage=1 00:32:32.976 --rc genhtml_function_coverage=1 00:32:32.976 --rc genhtml_legend=1 00:32:32.976 --rc geninfo_all_blocks=1 00:32:32.976 --rc geninfo_unexecuted_blocks=1 00:32:32.976 00:32:32.976 ' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.976 21:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.976 21:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.976 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.558 21:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.558 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.558 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.558 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.558 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.558 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.558 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.559 21:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:39.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:39.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.559 21:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:39.559 Found net devices under 0000:86:00.0: cvl_0_0 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.559 21:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:39.559 Found net devices under 0000:86:00.1: cvl_0_1 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.559 21:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:32:39.559 00:32:39.559 --- 10.0.0.2 ping statistics --- 00:32:39.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.559 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:32:39.559 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:32:39.559 00:32:39.560 --- 10.0.0.1 ping statistics --- 00:32:39.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.560 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1533663 
00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1533663 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1533663 ']' 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.560 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 [2024-12-05 21:25:46.805620] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:39.560 [2024-12-05 21:25:46.806612] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:32:39.560 [2024-12-05 21:25:46.806656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.560 [2024-12-05 21:25:46.888821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.560 [2024-12-05 21:25:46.932738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.560 [2024-12-05 21:25:46.932774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.560 [2024-12-05 21:25:46.932781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.560 [2024-12-05 21:25:46.932786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.560 [2024-12-05 21:25:46.932791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.560 [2024-12-05 21:25:46.934214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.560 [2024-12-05 21:25:46.934248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.560 [2024-12-05 21:25:46.934354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.560 [2024-12-05 21:25:46.934354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.560 [2024-12-05 21:25:47.002965] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:39.560 [2024-12-05 21:25:47.003092] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:39.560 [2024-12-05 21:25:47.003726] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:39.560 [2024-12-05 21:25:47.003944] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:39.560 [2024-12-05 21:25:47.003993] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 [2024-12-05 21:25:47.071213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 Malloc0 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 [2024-12-05 21:25:47.151449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.560 21:25:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:39.560 test case1: single bdev can't be used in multiple subsystems 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 [2024-12-05 21:25:47.182910] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:39.560 [2024-12-05 21:25:47.182930] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:39.560 [2024-12-05 21:25:47.182937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.560 request: 00:32:39.560 { 00:32:39.560 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:39.560 "namespace": { 00:32:39.560 "bdev_name": "Malloc0", 00:32:39.560 "no_auto_visible": false, 00:32:39.560 "hide_metadata": false 00:32:39.560 }, 00:32:39.560 "method": "nvmf_subsystem_add_ns", 00:32:39.560 "req_id": 1 00:32:39.560 } 00:32:39.560 Got JSON-RPC error response 00:32:39.560 response: 00:32:39.560 { 00:32:39.560 "code": -32602, 00:32:39.560 "message": "Invalid parameters" 00:32:39.560 } 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:39.560 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:39.560 Adding namespace failed - expected result. 
00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:39.561 test case2: host connect to nvmf target in multiple paths 00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.561 [2024-12-05 21:25:47.194989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:39.561 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:39.819 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:39.819 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:39.819 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:39.819 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:39.819 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:41.723 21:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:41.723 [global] 00:32:41.723 thread=1 00:32:41.723 invalidate=1 00:32:41.723 rw=write 00:32:41.723 time_based=1 00:32:41.723 runtime=1 00:32:41.723 ioengine=libaio 00:32:41.723 direct=1 00:32:41.723 bs=4096 00:32:41.723 iodepth=1 00:32:41.723 norandommap=0 00:32:41.723 numjobs=1 00:32:41.723 00:32:41.723 verify_dump=1 00:32:41.723 verify_backlog=512 00:32:41.723 verify_state_save=0 00:32:41.723 do_verify=1 00:32:41.723 verify=crc32c-intel 00:32:41.723 [job0] 00:32:41.723 filename=/dev/nvme0n1 00:32:41.723 Could not set queue depth (nvme0n1) 00:32:41.982 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.982 fio-3.35 00:32:41.982 Starting 1 thread 00:32:43.358 00:32:43.358 job0: (groupid=0, jobs=1): err= 0: pid=1534427: Thu Dec 5 
21:25:51 2024 00:32:43.358 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:32:43.358 slat (nsec): min=6787, max=74612, avg=8240.36, stdev=1820.16 00:32:43.358 clat (usec): min=154, max=462, avg=208.64, stdev=22.70 00:32:43.358 lat (usec): min=188, max=498, avg=216.88, stdev=22.90 00:32:43.358 clat percentiles (usec): 00:32:43.358 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:32:43.358 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:32:43.358 | 70.00th=[ 208], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 249], 00:32:43.358 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 338], 99.95th=[ 396], 00:32:43.358 | 99.99th=[ 465] 00:32:43.358 write: IOPS=2887, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:32:43.358 slat (nsec): min=9572, max=50298, avg=11717.60, stdev=1696.33 00:32:43.358 clat (usec): min=106, max=357, avg=136.11, stdev= 9.03 00:32:43.358 lat (usec): min=132, max=408, avg=147.83, stdev= 9.65 00:32:43.358 clat percentiles (usec): 00:32:43.358 | 1.00th=[ 126], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:32:43.358 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:32:43.358 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:32:43.358 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 227], 99.95th=[ 229], 00:32:43.358 | 99.99th=[ 359] 00:32:43.358 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:32:43.358 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:43.358 lat (usec) : 250=98.00%, 500=2.00% 00:32:43.358 cpu : usr=4.00%, sys=8.20%, ctx=5453, majf=0, minf=1 00:32:43.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:43.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.358 issued rwts: total=2560,2890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.358 
latency : target=0, window=0, percentile=100.00%, depth=1 00:32:43.358 00:32:43.358 Run status group 0 (all jobs): 00:32:43.358 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:32:43.358 WRITE: bw=11.3MiB/s (11.8MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=11.3MiB (11.8MB), run=1001-1001msec 00:32:43.358 00:32:43.359 Disk stats (read/write): 00:32:43.359 nvme0n1: ios=2347/2560, merge=0/0, ticks=1438/324, in_queue=1762, util=98.30% 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:43.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:43.359 rmmod nvme_tcp 00:32:43.359 rmmod nvme_fabrics 00:32:43.359 rmmod nvme_keyring 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1533663 ']' 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1533663 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1533663 ']' 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1533663 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533663 00:32:43.359 21:25:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533663' 00:32:43.359 killing process with pid 1533663 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1533663 00:32:43.359 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1533663 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.618 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.674 00:32:45.674 real 0m13.073s 00:32:45.674 user 0m23.856s 00:32:45.674 sys 0m6.183s 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:45.674 ************************************ 00:32:45.674 END TEST nvmf_nmic 00:32:45.674 ************************************ 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.674 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.967 ************************************ 00:32:45.967 START TEST nvmf_fio_target 00:32:45.967 ************************************ 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:45.967 * Looking for test storage... 
00:32:45.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.967 
21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.967 --rc genhtml_branch_coverage=1 00:32:45.967 --rc genhtml_function_coverage=1 00:32:45.967 --rc genhtml_legend=1 00:32:45.967 --rc geninfo_all_blocks=1 00:32:45.967 --rc geninfo_unexecuted_blocks=1 00:32:45.967 00:32:45.967 ' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.967 --rc genhtml_branch_coverage=1 00:32:45.967 --rc genhtml_function_coverage=1 00:32:45.967 --rc genhtml_legend=1 00:32:45.967 --rc geninfo_all_blocks=1 00:32:45.967 --rc geninfo_unexecuted_blocks=1 00:32:45.967 00:32:45.967 ' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.967 --rc genhtml_branch_coverage=1 00:32:45.967 --rc genhtml_function_coverage=1 00:32:45.967 --rc genhtml_legend=1 00:32:45.967 --rc geninfo_all_blocks=1 00:32:45.967 --rc geninfo_unexecuted_blocks=1 00:32:45.967 00:32:45.967 ' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:45.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.967 --rc genhtml_branch_coverage=1 00:32:45.967 --rc genhtml_function_coverage=1 00:32:45.967 --rc genhtml_legend=1 00:32:45.967 --rc geninfo_all_blocks=1 
00:32:45.967 --rc geninfo_unexecuted_blocks=1 00:32:45.967 00:32:45.967 ' 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.967 
21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.967 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.968 21:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.968 
21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.968 21:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.968 21:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.543 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.543 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:52.544 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:52.544 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.544 
21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:52.544 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:52.544 Found net devices under 0000:86:00.1: cvl_0_1 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.544 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:32:52.544 00:32:52.544 --- 10.0.0.2 ping statistics --- 00:32:52.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.544 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:52.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:32:52.544 00:32:52.544 --- 10.0.0.1 ping statistics --- 00:32:52.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.544 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.544 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1537985 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1537985 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1537985 ']' 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.544 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.545 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.545 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.545 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.545 [2024-12-05 21:25:59.944846] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:52.545 [2024-12-05 21:25:59.945860] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:32:52.545 [2024-12-05 21:25:59.945898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.545 [2024-12-05 21:26:00.026864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.545 [2024-12-05 21:26:00.077558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.545 [2024-12-05 21:26:00.077592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.545 [2024-12-05 21:26:00.077600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.545 [2024-12-05 21:26:00.077606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.545 [2024-12-05 21:26:00.077612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.545 [2024-12-05 21:26:00.078951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.545 [2024-12-05 21:26:00.078978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.545 [2024-12-05 21:26:00.079084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.545 [2024-12-05 21:26:00.079085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.545 [2024-12-05 21:26:00.150836] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:52.545 [2024-12-05 21:26:00.150854] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:52.545 [2024-12-05 21:26:00.151660] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:52.545 [2024-12-05 21:26:00.151684] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:52.545 [2024-12-05 21:26:00.151759] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:52.803 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.803 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:52.804 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.804 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.804 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.804 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.804 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.062 [2024-12-05 21:26:00.979896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.062 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.320 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:53.321 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:32:53.579 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:53.579 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.579 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:53.579 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.838 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:53.838 21:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:54.096 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.354 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:54.354 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.613 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:54.613 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.613 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:54.613 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:54.871 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:55.130 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:55.130 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:55.388 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:55.388 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:55.388 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.645 [2024-12-05 21:26:03.611831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.645 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:55.903 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:56.162 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:58.685 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:58.685 [global] 00:32:58.685 thread=1 00:32:58.685 invalidate=1 00:32:58.685 rw=write 00:32:58.685 time_based=1 00:32:58.685 runtime=1 00:32:58.685 ioengine=libaio 00:32:58.685 direct=1 00:32:58.685 bs=4096 00:32:58.685 iodepth=1 00:32:58.685 norandommap=0 00:32:58.685 numjobs=1 00:32:58.685 00:32:58.685 verify_dump=1 00:32:58.685 verify_backlog=512 00:32:58.685 verify_state_save=0 00:32:58.685 do_verify=1 00:32:58.685 verify=crc32c-intel 00:32:58.685 [job0] 00:32:58.685 filename=/dev/nvme0n1 00:32:58.685 [job1] 00:32:58.685 filename=/dev/nvme0n2 00:32:58.685 [job2] 00:32:58.685 filename=/dev/nvme0n3 00:32:58.685 [job3] 00:32:58.685 filename=/dev/nvme0n4 00:32:58.685 Could not set queue depth (nvme0n1) 00:32:58.685 Could not set queue depth (nvme0n2) 00:32:58.685 Could not set queue depth (nvme0n3) 00:32:58.685 Could not set queue depth (nvme0n4) 00:32:58.685 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.685 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.685 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.685 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.685 fio-3.35 00:32:58.685 Starting 4 threads 00:33:00.069 00:33:00.069 job0: (groupid=0, jobs=1): err= 0: pid=1539312: Thu Dec 5 21:26:07 2024 00:33:00.069 read: IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:33:00.069 slat (nsec): min=8486, max=25786, avg=23471.75, stdev=4412.91 00:33:00.069 clat (usec): min=268, max=41954, avg=39291.50, stdev=8315.15 00:33:00.069 lat (usec): min=278, 
max=41979, avg=39314.97, stdev=8317.99 00:33:00.069 clat percentiles (usec): 00:33:00.069 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:00.069 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:00.069 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:00.069 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:00.069 | 99.99th=[42206] 00:33:00.069 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:33:00.069 slat (nsec): min=10938, max=40650, avg=12581.96, stdev=2764.97 00:33:00.069 clat (usec): min=143, max=439, avg=173.61, stdev=21.37 00:33:00.069 lat (usec): min=154, max=459, avg=186.19, stdev=22.65 00:33:00.069 clat percentiles (usec): 00:33:00.069 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:33:00.069 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:33:00.069 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:33:00.069 | 99.00th=[ 269], 99.50th=[ 322], 99.90th=[ 441], 99.95th=[ 441], 00:33:00.069 | 99.99th=[ 441] 00:33:00.069 bw ( KiB/s): min= 4096, max= 4096, per=15.30%, avg=4096.00, stdev= 0.00, samples=1 00:33:00.069 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:00.069 lat (usec) : 250=94.03%, 500=1.68% 00:33:00.069 lat (msec) : 50=4.29% 00:33:00.069 cpu : usr=0.10%, sys=1.25%, ctx=539, majf=0, minf=1 00:33:00.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.069 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.069 job1: (groupid=0, jobs=1): err= 0: pid=1539313: Thu Dec 5 21:26:07 2024 00:33:00.069 read: IOPS=1035, BW=4143KiB/s (4242kB/s)(4176KiB/1008msec) 
00:33:00.069 slat (nsec): min=3773, max=23293, avg=6617.89, stdev=1972.67 00:33:00.069 clat (usec): min=207, max=42519, avg=699.32, stdev=4286.66 00:33:00.069 lat (usec): min=213, max=42543, avg=705.94, stdev=4288.05 00:33:00.069 clat percentiles (usec): 00:33:00.069 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 225], 00:33:00.069 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:33:00.069 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 260], 00:33:00.069 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42730], 00:33:00.069 | 99.99th=[42730] 00:33:00.069 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:33:00.069 slat (nsec): min=6589, max=46093, avg=8667.56, stdev=2354.18 00:33:00.069 clat (usec): min=122, max=468, avg=164.10, stdev=21.44 00:33:00.069 lat (usec): min=131, max=482, avg=172.77, stdev=22.20 00:33:00.069 clat percentiles (usec): 00:33:00.069 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:33:00.069 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:33:00.069 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 196], 00:33:00.069 | 99.00th=[ 223], 99.50th=[ 258], 99.90th=[ 383], 99.95th=[ 469], 00:33:00.069 | 99.99th=[ 469] 00:33:00.069 bw ( KiB/s): min= 1008, max=11280, per=22.95%, avg=6144.00, stdev=7263.40, samples=2 00:33:00.069 iops : min= 252, max= 2820, avg=1536.00, stdev=1815.85, samples=2 00:33:00.069 lat (usec) : 250=92.09%, 500=7.44% 00:33:00.069 lat (msec) : 50=0.47% 00:33:00.069 cpu : usr=1.29%, sys=1.79%, ctx=2580, majf=0, minf=2 00:33:00.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.069 issued rwts: total=1044,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.069 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:33:00.069 job2: (groupid=0, jobs=1): err= 0: pid=1539314: Thu Dec 5 21:26:07 2024 00:33:00.069 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:00.069 slat (nsec): min=6744, max=32066, avg=8569.38, stdev=1202.53 00:33:00.069 clat (usec): min=187, max=1270, avg=247.08, stdev=37.88 00:33:00.069 lat (usec): min=199, max=1278, avg=255.64, stdev=38.19 00:33:00.069 clat percentiles (usec): 00:33:00.069 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 235], 00:33:00.069 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:33:00.069 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:33:00.069 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 502], 99.95th=[ 502], 00:33:00.069 | 99.99th=[ 1270] 00:33:00.069 write: IOPS=2357, BW=9431KiB/s (9657kB/s)(9440KiB/1001msec); 0 zone resets 00:33:00.069 slat (nsec): min=4141, max=44685, avg=11542.78, stdev=2273.11 00:33:00.069 clat (usec): min=142, max=462, avg=184.96, stdev=28.62 00:33:00.069 lat (usec): min=152, max=476, avg=196.51, stdev=28.90 00:33:00.069 clat percentiles (usec): 00:33:00.069 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:33:00.069 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:33:00.069 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 219], 95.00th=[ 247], 00:33:00.069 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 359], 99.95th=[ 367], 00:33:00.069 | 99.99th=[ 465] 00:33:00.070 bw ( KiB/s): min= 8984, max= 8984, per=33.55%, avg=8984.00, stdev= 0.00, samples=1 00:33:00.070 iops : min= 2246, max= 2246, avg=2246.00, stdev= 0.00, samples=1 00:33:00.070 lat (usec) : 250=84.85%, 500=15.06%, 750=0.07% 00:33:00.070 lat (msec) : 2=0.02% 00:33:00.070 cpu : usr=3.00%, sys=7.30%, ctx=4408, majf=0, minf=2 00:33:00.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.070 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.070 issued rwts: total=2048,2360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.070 job3: (groupid=0, jobs=1): err= 0: pid=1539315: Thu Dec 5 21:26:07 2024 00:33:00.070 read: IOPS=2127, BW=8511KiB/s (8716kB/s)(8520KiB/1001msec) 00:33:00.070 slat (nsec): min=6773, max=39754, avg=7797.20, stdev=1384.28 00:33:00.070 clat (usec): min=191, max=2875, avg=232.70, stdev=60.37 00:33:00.070 lat (usec): min=199, max=2882, avg=240.50, stdev=60.41 00:33:00.070 clat percentiles (usec): 00:33:00.070 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:33:00.070 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:33:00.070 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 258], 00:33:00.070 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 400], 99.95th=[ 474], 00:33:00.070 | 99.99th=[ 2868] 00:33:00.070 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:33:00.070 slat (usec): min=7, max=889, avg=11.52, stdev=24.11 00:33:00.070 clat (usec): min=135, max=400, avg=175.23, stdev=23.70 00:33:00.070 lat (usec): min=145, max=1123, avg=186.76, stdev=35.05 00:33:00.070 clat percentiles (usec): 00:33:00.070 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:33:00.070 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:33:00.070 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:33:00.070 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 367], 00:33:00.070 | 99.99th=[ 400] 00:33:00.070 bw ( KiB/s): min=10160, max=10160, per=37.95%, avg=10160.00, stdev= 0.00, samples=1 00:33:00.070 iops : min= 2540, max= 2540, avg=2540.00, stdev= 0.00, samples=1 00:33:00.070 lat (usec) : 250=93.90%, 500=6.08% 00:33:00.070 lat (msec) : 4=0.02% 00:33:00.070 cpu : usr=2.20%, sys=4.60%, ctx=4693, majf=0, minf=1 00:33:00.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.070 issued rwts: total=2130,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.070 00:33:00.070 Run status group 0 (all jobs): 00:33:00.070 READ: bw=19.7MiB/s (20.6MB/s), 92.2KiB/s-8511KiB/s (94.4kB/s-8716kB/s), io=20.5MiB (21.5MB), run=1001-1041msec 00:33:00.070 WRITE: bw=26.1MiB/s (27.4MB/s), 1967KiB/s-9.99MiB/s (2015kB/s-10.5MB/s), io=27.2MiB (28.5MB), run=1001-1041msec 00:33:00.070 00:33:00.070 Disk stats (read/write): 00:33:00.070 nvme0n1: ios=71/512, merge=0/0, ticks=1576/84, in_queue=1660, util=97.90% 00:33:00.070 nvme0n2: ios=1057/1536, merge=0/0, ticks=574/245, in_queue=819, util=87.08% 00:33:00.070 nvme0n3: ios=1685/2048, merge=0/0, ticks=403/349, in_queue=752, util=88.95% 00:33:00.070 nvme0n4: ios=1944/2048, merge=0/0, ticks=622/355, in_queue=977, util=98.42% 00:33:00.070 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:00.070 [global] 00:33:00.070 thread=1 00:33:00.070 invalidate=1 00:33:00.070 rw=randwrite 00:33:00.070 time_based=1 00:33:00.070 runtime=1 00:33:00.070 ioengine=libaio 00:33:00.070 direct=1 00:33:00.070 bs=4096 00:33:00.070 iodepth=1 00:33:00.070 norandommap=0 00:33:00.070 numjobs=1 00:33:00.070 00:33:00.070 verify_dump=1 00:33:00.070 verify_backlog=512 00:33:00.070 verify_state_save=0 00:33:00.070 do_verify=1 00:33:00.070 verify=crc32c-intel 00:33:00.070 [job0] 00:33:00.070 filename=/dev/nvme0n1 00:33:00.070 [job1] 00:33:00.070 filename=/dev/nvme0n2 00:33:00.070 [job2] 00:33:00.070 filename=/dev/nvme0n3 00:33:00.070 [job3] 00:33:00.070 filename=/dev/nvme0n4 00:33:00.070 Could not set queue depth 
(nvme0n1) 00:33:00.070 Could not set queue depth (nvme0n2) 00:33:00.070 Could not set queue depth (nvme0n3) 00:33:00.070 Could not set queue depth (nvme0n4) 00:33:00.327 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:00.327 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:00.327 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:00.327 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:00.327 fio-3.35 00:33:00.327 Starting 4 threads 00:33:01.714 00:33:01.714 job0: (groupid=0, jobs=1): err= 0: pid=1539680: Thu Dec 5 21:26:09 2024 00:33:01.714 read: IOPS=22, BW=91.4KiB/s (93.6kB/s)(92.0KiB/1007msec) 00:33:01.714 slat (nsec): min=10513, max=24935, avg=20217.91, stdev=3180.96 00:33:01.714 clat (usec): min=440, max=41080, avg=39178.88, stdev=8445.44 00:33:01.714 lat (usec): min=465, max=41101, avg=39199.10, stdev=8444.45 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 441], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:33:01.714 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:01.714 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:01.714 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:01.714 | 99.99th=[41157] 00:33:01.714 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:33:01.714 slat (nsec): min=10001, max=39389, avg=12277.28, stdev=2297.38 00:33:01.714 clat (usec): min=148, max=637, avg=189.40, stdev=35.59 00:33:01.714 lat (usec): min=160, max=649, avg=201.68, stdev=35.86 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:33:01.714 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:33:01.714 | 
70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 231], 00:33:01.714 | 99.00th=[ 265], 99.50th=[ 469], 99.90th=[ 635], 99.95th=[ 635], 00:33:01.714 | 99.99th=[ 635] 00:33:01.714 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:33:01.714 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:01.714 lat (usec) : 250=93.27%, 500=2.24%, 750=0.37% 00:33:01.714 lat (msec) : 50=4.11% 00:33:01.714 cpu : usr=0.70%, sys=0.70%, ctx=535, majf=0, minf=1 00:33:01.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.714 job1: (groupid=0, jobs=1): err= 0: pid=1539683: Thu Dec 5 21:26:09 2024 00:33:01.714 read: IOPS=1986, BW=7946KiB/s (8136kB/s)(8200KiB/1032msec) 00:33:01.714 slat (nsec): min=7591, max=46257, avg=8804.07, stdev=1816.42 00:33:01.714 clat (usec): min=182, max=40882, avg=267.35, stdev=1267.54 00:33:01.714 lat (usec): min=191, max=40905, avg=276.15, stdev=1267.79 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:33:01.714 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 237], 60.00th=[ 243], 00:33:01.714 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 255], 00:33:01.714 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 457], 99.95th=[40633], 00:33:01.714 | 99.99th=[40633] 00:33:01.714 write: IOPS=2480, BW=9922KiB/s (10.2MB/s)(10.0MiB/1032msec); 0 zone resets 00:33:01.714 slat (nsec): min=3431, max=42307, avg=11551.92, stdev=3255.57 00:33:01.714 clat (usec): min=126, max=1919, avg=164.87, stdev=48.20 00:33:01.714 lat (usec): min=131, max=1930, avg=176.42, stdev=48.09 00:33:01.714 clat percentiles 
(usec): 00:33:01.714 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:33:01.714 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 155], 00:33:01.714 | 70.00th=[ 172], 80.00th=[ 188], 90.00th=[ 210], 95.00th=[ 245], 00:33:01.714 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 347], 00:33:01.714 | 99.99th=[ 1926] 00:33:01.714 bw ( KiB/s): min= 9736, max=10744, per=64.56%, avg=10240.00, stdev=712.76, samples=2 00:33:01.714 iops : min= 2434, max= 2686, avg=2560.00, stdev=178.19, samples=2 00:33:01.714 lat (usec) : 250=92.02%, 500=7.92% 00:33:01.714 lat (msec) : 2=0.02%, 50=0.04% 00:33:01.714 cpu : usr=3.78%, sys=6.98%, ctx=4612, majf=0, minf=1 00:33:01.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.714 job2: (groupid=0, jobs=1): err= 0: pid=1539689: Thu Dec 5 21:26:09 2024 00:33:01.714 read: IOPS=160, BW=643KiB/s (658kB/s)(664KiB/1033msec) 00:33:01.714 slat (nsec): min=6877, max=27443, avg=9611.09, stdev=5423.91 00:33:01.714 clat (usec): min=204, max=41163, avg=5625.62, stdev=13847.32 00:33:01.714 lat (usec): min=212, max=41187, avg=5635.23, stdev=13852.14 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:33:01.714 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:33:01.714 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[41157], 95.00th=[41157], 00:33:01.714 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:01.714 | 99.99th=[41157] 00:33:01.714 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:33:01.714 slat (nsec): min=9678, max=40350, 
avg=10865.58, stdev=1561.40 00:33:01.714 clat (usec): min=156, max=319, avg=175.06, stdev=11.55 00:33:01.714 lat (usec): min=167, max=359, avg=185.92, stdev=12.41 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 167], 00:33:01.714 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:33:01.714 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:33:01.714 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 318], 99.95th=[ 318], 00:33:01.714 | 99.99th=[ 318] 00:33:01.714 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:33:01.714 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:01.714 lat (usec) : 250=95.87%, 500=0.74%, 750=0.15% 00:33:01.714 lat (msec) : 50=3.24% 00:33:01.714 cpu : usr=0.29%, sys=0.68%, ctx=679, majf=0, minf=1 00:33:01.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 issued rwts: total=166,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.714 job3: (groupid=0, jobs=1): err= 0: pid=1539691: Thu Dec 5 21:26:09 2024 00:33:01.714 read: IOPS=44, BW=180KiB/s (184kB/s)(180KiB/1001msec) 00:33:01.714 slat (nsec): min=8627, max=29903, avg=16431.58, stdev=7303.98 00:33:01.714 clat (usec): min=385, max=41085, avg=19320.81, stdev=20460.90 00:33:01.714 lat (usec): min=395, max=41108, avg=19337.24, stdev=20467.21 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 388], 5.00th=[ 388], 10.00th=[ 388], 20.00th=[ 392], 00:33:01.714 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[40633], 00:33:01.714 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:01.714 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:33:01.714 | 99.99th=[41157] 00:33:01.714 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:01.714 slat (usec): min=11, max=24954, avg=61.45, stdev=1102.27 00:33:01.714 clat (usec): min=153, max=976, avg=189.13, stdev=50.41 00:33:01.714 lat (usec): min=165, max=25141, avg=250.59, stdev=1103.34 00:33:01.714 clat percentiles (usec): 00:33:01.714 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:33:01.714 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:33:01.714 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 237], 95.00th=[ 243], 00:33:01.714 | 99.00th=[ 326], 99.50th=[ 553], 99.90th=[ 979], 99.95th=[ 979], 00:33:01.714 | 99.99th=[ 979] 00:33:01.714 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:33:01.714 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:01.714 lat (usec) : 250=88.87%, 500=6.82%, 750=0.36%, 1000=0.18% 00:33:01.714 lat (msec) : 50=3.77% 00:33:01.714 cpu : usr=0.60%, sys=0.90%, ctx=559, majf=0, minf=1 00:33:01.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.714 issued rwts: total=45,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.714 00:33:01.714 Run status group 0 (all jobs): 00:33:01.714 READ: bw=8844KiB/s (9056kB/s), 91.4KiB/s-7946KiB/s (93.6kB/s-8136kB/s), io=9136KiB (9355kB), run=1001-1033msec 00:33:01.714 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-9922KiB/s (2030kB/s-10.2MB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:33:01.714 00:33:01.714 Disk stats (read/write): 00:33:01.714 nvme0n1: ios=66/512, merge=0/0, ticks=715/93, in_queue=808, util=82.05% 00:33:01.714 nvme0n2: ios=1820/2048, merge=0/0, ticks=601/330, 
in_queue=931, util=97.53% 00:33:01.715 nvme0n3: ios=195/512, merge=0/0, ticks=1527/89, in_queue=1616, util=99.02% 00:33:01.715 nvme0n4: ios=74/512, merge=0/0, ticks=1024/93, in_queue=1117, util=97.57% 00:33:01.715 21:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:01.715 [global] 00:33:01.715 thread=1 00:33:01.715 invalidate=1 00:33:01.715 rw=write 00:33:01.715 time_based=1 00:33:01.715 runtime=1 00:33:01.715 ioengine=libaio 00:33:01.715 direct=1 00:33:01.715 bs=4096 00:33:01.715 iodepth=128 00:33:01.715 norandommap=0 00:33:01.715 numjobs=1 00:33:01.715 00:33:01.715 verify_dump=1 00:33:01.715 verify_backlog=512 00:33:01.715 verify_state_save=0 00:33:01.715 do_verify=1 00:33:01.715 verify=crc32c-intel 00:33:01.715 [job0] 00:33:01.715 filename=/dev/nvme0n1 00:33:01.715 [job1] 00:33:01.715 filename=/dev/nvme0n2 00:33:01.715 [job2] 00:33:01.715 filename=/dev/nvme0n3 00:33:01.715 [job3] 00:33:01.715 filename=/dev/nvme0n4 00:33:01.715 Could not set queue depth (nvme0n1) 00:33:01.715 Could not set queue depth (nvme0n2) 00:33:01.715 Could not set queue depth (nvme0n3) 00:33:01.715 Could not set queue depth (nvme0n4) 00:33:01.971 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.971 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.971 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.971 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:01.971 fio-3.35 00:33:01.971 Starting 4 threads 00:33:03.341 00:33:03.341 job0: (groupid=0, jobs=1): err= 0: pid=1540060: Thu Dec 5 21:26:11 2024 00:33:03.341 read: IOPS=3609, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1010msec) 00:33:03.341 slat 
(nsec): min=1340, max=17247k, avg=141788.35, stdev=959787.92 00:33:03.341 clat (usec): min=4035, max=68026, avg=15140.14, stdev=11736.80 00:33:03.341 lat (usec): min=4044, max=68035, avg=15281.93, stdev=11836.64 00:33:03.341 clat percentiles (usec): 00:33:03.341 | 1.00th=[ 6456], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[ 9110], 00:33:03.341 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:33:03.341 | 70.00th=[13173], 80.00th=[17171], 90.00th=[24511], 95.00th=[41681], 00:33:03.341 | 99.00th=[65799], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:33:03.341 | 99.99th=[67634] 00:33:03.341 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:33:03.341 slat (usec): min=2, max=22251, avg=112.40, stdev=692.67 00:33:03.341 clat (usec): min=1030, max=67995, avg=17811.50, stdev=10456.85 00:33:03.341 lat (usec): min=1039, max=67999, avg=17923.89, stdev=10499.05 00:33:03.341 clat percentiles (usec): 00:33:03.341 | 1.00th=[ 4359], 5.00th=[ 5866], 10.00th=[ 8160], 20.00th=[ 9503], 00:33:03.341 | 30.00th=[11076], 40.00th=[12125], 50.00th=[18482], 60.00th=[20317], 00:33:03.341 | 70.00th=[20841], 80.00th=[22938], 90.00th=[25035], 95.00th=[38011], 00:33:03.341 | 99.00th=[58983], 99.50th=[60031], 99.90th=[66847], 99.95th=[67634], 00:33:03.341 | 99.99th=[67634] 00:33:03.341 bw ( KiB/s): min=12288, max=19952, per=22.11%, avg=16120.00, stdev=5419.27, samples=2 00:33:03.341 iops : min= 3072, max= 4988, avg=4030.00, stdev=1354.82, samples=2 00:33:03.341 lat (msec) : 2=0.03%, 4=0.32%, 10=22.36%, 20=48.71%, 50=25.01% 00:33:03.341 lat (msec) : 100=3.58% 00:33:03.341 cpu : usr=2.87%, sys=4.86%, ctx=353, majf=0, minf=1 00:33:03.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:03.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.341 issued rwts: total=3646,4096,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:33:03.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.341 job1: (groupid=0, jobs=1): err= 0: pid=1540061: Thu Dec 5 21:26:11 2024 00:33:03.341 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:33:03.341 slat (nsec): min=1104, max=14634k, avg=88946.27, stdev=654145.47 00:33:03.341 clat (usec): min=6482, max=42369, avg=11482.39, stdev=5237.38 00:33:03.341 lat (usec): min=6488, max=42396, avg=11571.34, stdev=5291.91 00:33:03.341 clat percentiles (usec): 00:33:03.341 | 1.00th=[ 7177], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 8848], 00:33:03.341 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:33:03.341 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13566], 95.00th=[26870], 00:33:03.341 | 99.00th=[32375], 99.50th=[36963], 99.90th=[36963], 99.95th=[41157], 00:33:03.341 | 99.99th=[42206] 00:33:03.341 write: IOPS=5571, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1007msec); 0 zone resets 00:33:03.341 slat (nsec): min=1915, max=24432k, avg=90730.06, stdev=607768.09 00:33:03.341 clat (usec): min=4009, max=48463, avg=12128.46, stdev=6137.12 00:33:03.341 lat (usec): min=5153, max=48487, avg=12219.19, stdev=6185.90 00:33:03.341 clat percentiles (usec): 00:33:03.341 | 1.00th=[ 6063], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9503], 00:33:03.341 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:33:03.341 | 70.00th=[10159], 80.00th=[12125], 90.00th=[23725], 95.00th=[26870], 00:33:03.341 | 99.00th=[35390], 99.50th=[36439], 99.90th=[40633], 99.95th=[41157], 00:33:03.341 | 99.99th=[48497] 00:33:03.341 bw ( KiB/s): min=21896, max=21960, per=30.08%, avg=21928.00, stdev=45.25, samples=2 00:33:03.341 iops : min= 5474, max= 5490, avg=5482.00, stdev=11.31, samples=2 00:33:03.341 lat (msec) : 10=61.83%, 20=28.07%, 50=10.10% 00:33:03.341 cpu : usr=4.87%, sys=6.46%, ctx=365, majf=0, minf=1 00:33:03.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:03.341 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.341 issued rwts: total=5120,5610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.341 job2: (groupid=0, jobs=1): err= 0: pid=1540062: Thu Dec 5 21:26:11 2024 00:33:03.341 read: IOPS=2790, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1005msec) 00:33:03.342 slat (nsec): min=1383, max=13996k, avg=130396.66, stdev=933413.60 00:33:03.342 clat (usec): min=3408, max=50404, avg=17833.44, stdev=8913.79 00:33:03.342 lat (usec): min=3875, max=50415, avg=17963.83, stdev=8941.59 00:33:03.342 clat percentiles (usec): 00:33:03.342 | 1.00th=[ 7439], 5.00th=[ 9765], 10.00th=[11731], 20.00th=[12125], 00:33:03.342 | 30.00th=[12256], 40.00th=[12649], 50.00th=[14746], 60.00th=[17433], 00:33:03.342 | 70.00th=[20055], 80.00th=[22152], 90.00th=[27395], 95.00th=[33162], 00:33:03.342 | 99.00th=[50070], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:33:03.342 | 99.99th=[50594] 00:33:03.342 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:33:03.342 slat (usec): min=2, max=37702, avg=176.40, stdev=1105.22 00:33:03.342 clat (usec): min=1151, max=82156, avg=25117.81, stdev=14190.54 00:33:03.342 lat (usec): min=1161, max=82167, avg=25294.20, stdev=14254.61 00:33:03.342 clat percentiles (usec): 00:33:03.342 | 1.00th=[ 4686], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[16319], 00:33:03.342 | 30.00th=[19268], 40.00th=[20579], 50.00th=[20841], 60.00th=[22152], 00:33:03.342 | 70.00th=[23725], 80.00th=[33162], 90.00th=[49021], 95.00th=[53740], 00:33:03.342 | 99.00th=[76022], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:33:03.342 | 99.99th=[82314] 00:33:03.342 bw ( KiB/s): min=12288, max=12288, per=16.85%, avg=12288.00, stdev= 0.00, samples=2 00:33:03.342 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:33:03.342 lat (msec) : 2=0.05%, 
4=0.49%, 10=5.99%, 20=44.62%, 50=42.58% 00:33:03.342 lat (msec) : 100=6.26% 00:33:03.342 cpu : usr=1.99%, sys=3.88%, ctx=340, majf=0, minf=1 00:33:03.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:33:03.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.342 issued rwts: total=2804,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.342 job3: (groupid=0, jobs=1): err= 0: pid=1540063: Thu Dec 5 21:26:11 2024 00:33:03.342 read: IOPS=5616, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:33:03.342 slat (nsec): min=1406, max=3036.6k, avg=87841.11, stdev=427010.09 00:33:03.342 clat (usec): min=407, max=14591, avg=11220.54, stdev=1426.42 00:33:03.342 lat (usec): min=2902, max=14596, avg=11308.38, stdev=1390.85 00:33:03.342 clat percentiles (usec): 00:33:03.342 | 1.00th=[ 5473], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:33:03.342 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:33:03.342 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12780], 95.00th=[13042], 00:33:03.342 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14484], 99.95th=[14615], 00:33:03.342 | 99.99th=[14615] 00:33:03.342 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:33:03.342 slat (usec): min=2, max=9051, avg=84.94, stdev=402.19 00:33:03.342 clat (usec): min=1193, max=22378, avg=11246.28, stdev=1493.58 00:33:03.342 lat (usec): min=1203, max=22385, avg=11331.22, stdev=1471.74 00:33:03.342 clat percentiles (usec): 00:33:03.342 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[10159], 00:33:03.342 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:33:03.342 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12649], 95.00th=[13566], 00:33:03.342 | 99.00th=[16057], 99.50th=[16057], 99.90th=[22414], 
99.95th=[22414], 00:33:03.342 | 99.99th=[22414] 00:33:03.342 bw ( KiB/s): min=22080, max=22080, per=30.28%, avg=22080.00, stdev= 0.00, samples=1 00:33:03.342 iops : min= 5520, max= 5520, avg=5520.00, stdev= 0.00, samples=1 00:33:03.342 lat (usec) : 500=0.01% 00:33:03.342 lat (msec) : 2=0.02%, 4=0.28%, 10=18.49%, 20=81.07%, 50=0.12% 00:33:03.342 cpu : usr=3.10%, sys=5.09%, ctx=633, majf=0, minf=1 00:33:03.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:03.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.342 issued rwts: total=5628,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.342 00:33:03.342 Run status group 0 (all jobs): 00:33:03.342 READ: bw=66.5MiB/s (69.7MB/s), 10.9MiB/s-21.9MiB/s (11.4MB/s-23.0MB/s), io=67.2MiB (70.4MB), run=1002-1010msec 00:33:03.342 WRITE: bw=71.2MiB/s (74.7MB/s), 11.9MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=71.9MiB (75.4MB), run=1002-1010msec 00:33:03.342 00:33:03.342 Disk stats (read/write): 00:33:03.342 nvme0n1: ios=3121/3503, merge=0/0, ticks=43004/56114, in_queue=99118, util=82.16% 00:33:03.342 nvme0n2: ios=4633/4786, merge=0/0, ticks=25092/23797, in_queue=48889, util=97.53% 00:33:03.342 nvme0n3: ios=2107/2559, merge=0/0, ticks=33944/61401, in_queue=95345, util=97.73% 00:33:03.342 nvme0n4: ios=4278/4608, merge=0/0, ticks=13106/13681, in_queue=26787, util=97.36% 00:33:03.342 21:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:03.342 [global] 00:33:03.342 thread=1 00:33:03.342 invalidate=1 00:33:03.342 rw=randwrite 00:33:03.342 time_based=1 00:33:03.342 runtime=1 00:33:03.342 ioengine=libaio 00:33:03.342 direct=1 00:33:03.342 bs=4096 00:33:03.342 
iodepth=128 00:33:03.342 norandommap=0 00:33:03.342 numjobs=1 00:33:03.342 00:33:03.342 verify_dump=1 00:33:03.342 verify_backlog=512 00:33:03.342 verify_state_save=0 00:33:03.342 do_verify=1 00:33:03.342 verify=crc32c-intel 00:33:03.342 [job0] 00:33:03.342 filename=/dev/nvme0n1 00:33:03.342 [job1] 00:33:03.342 filename=/dev/nvme0n2 00:33:03.342 [job2] 00:33:03.342 filename=/dev/nvme0n3 00:33:03.342 [job3] 00:33:03.342 filename=/dev/nvme0n4 00:33:03.342 Could not set queue depth (nvme0n1) 00:33:03.342 Could not set queue depth (nvme0n2) 00:33:03.342 Could not set queue depth (nvme0n3) 00:33:03.342 Could not set queue depth (nvme0n4) 00:33:03.599 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:03.599 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:03.599 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:03.599 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:03.599 fio-3.35 00:33:03.599 Starting 4 threads 00:33:04.973 00:33:04.973 job0: (groupid=0, jobs=1): err= 0: pid=1540431: Thu Dec 5 21:26:12 2024 00:33:04.973 read: IOPS=3926, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1004msec) 00:33:04.973 slat (nsec): min=1243, max=13795k, avg=117426.27, stdev=857054.96 00:33:04.973 clat (usec): min=1771, max=49590, avg=14949.15, stdev=6067.79 00:33:04.973 lat (usec): min=4170, max=49603, avg=15066.58, stdev=6135.70 00:33:04.973 clat percentiles (usec): 00:33:04.973 | 1.00th=[ 6718], 5.00th=[10028], 10.00th=[10421], 20.00th=[10945], 00:33:04.973 | 30.00th=[11338], 40.00th=[12780], 50.00th=[13829], 60.00th=[14353], 00:33:04.973 | 70.00th=[15270], 80.00th=[17171], 90.00th=[21103], 95.00th=[25822], 00:33:04.973 | 99.00th=[42730], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:33:04.973 | 99.99th=[49546] 00:33:04.973 
write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:33:04.973 slat (usec): min=2, max=36505, avg=112.76, stdev=904.83 00:33:04.973 clat (usec): min=1095, max=58000, avg=16677.19, stdev=11176.09 00:33:04.973 lat (usec): min=1135, max=58007, avg=16789.95, stdev=11228.88 00:33:04.973 clat percentiles (usec): 00:33:04.973 | 1.00th=[ 3163], 5.00th=[ 6325], 10.00th=[ 7308], 20.00th=[ 9110], 00:33:04.973 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11731], 60.00th=[14222], 00:33:04.973 | 70.00th=[17695], 80.00th=[21627], 90.00th=[37487], 95.00th=[41157], 00:33:04.973 | 99.00th=[51643], 99.50th=[54264], 99.90th=[56886], 99.95th=[57934], 00:33:04.973 | 99.99th=[57934] 00:33:04.973 bw ( KiB/s): min=16384, max=16384, per=20.82%, avg=16384.00, stdev= 0.00, samples=2 00:33:04.973 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:33:04.973 lat (msec) : 2=0.22%, 4=0.78%, 10=16.53%, 20=63.01%, 50=18.80% 00:33:04.973 lat (msec) : 100=0.65% 00:33:04.973 cpu : usr=2.59%, sys=5.18%, ctx=347, majf=0, minf=1 00:33:04.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:04.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.973 issued rwts: total=3942,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.973 job1: (groupid=0, jobs=1): err= 0: pid=1540432: Thu Dec 5 21:26:12 2024 00:33:04.973 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:33:04.973 slat (nsec): min=1372, max=10341k, avg=81377.58, stdev=650781.62 00:33:04.973 clat (usec): min=3585, max=24925, avg=10685.53, stdev=2784.06 00:33:04.973 lat (usec): min=3596, max=24930, avg=10766.91, stdev=2826.39 00:33:04.973 clat percentiles (usec): 00:33:04.973 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 8586], 00:33:04.973 | 30.00th=[ 9110], 40.00th=[ 
9503], 50.00th=[ 9896], 60.00th=[10683], 00:33:04.973 | 70.00th=[11600], 80.00th=[12911], 90.00th=[14746], 95.00th=[15926], 00:33:04.973 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20841], 99.95th=[20841], 00:33:04.973 | 99.99th=[25035] 00:33:04.973 write: IOPS=6466, BW=25.3MiB/s (26.5MB/s)(25.5MiB/1008msec); 0 zone resets 00:33:04.973 slat (usec): min=2, max=15084, avg=70.58, stdev=541.75 00:33:04.973 clat (usec): min=1420, max=22598, avg=9525.67, stdev=2648.32 00:33:04.973 lat (usec): min=1434, max=22605, avg=9596.25, stdev=2677.56 00:33:04.973 clat percentiles (usec): 00:33:04.973 | 1.00th=[ 3490], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 7504], 00:33:04.973 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:33:04.973 | 70.00th=[10159], 80.00th=[11338], 90.00th=[12125], 95.00th=[13042], 00:33:04.973 | 99.00th=[17433], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:33:04.973 | 99.99th=[22676] 00:33:04.973 bw ( KiB/s): min=24912, max=26216, per=32.49%, avg=25564.00, stdev=922.07, samples=2 00:33:04.973 iops : min= 6228, max= 6554, avg=6391.00, stdev=230.52, samples=2 00:33:04.973 lat (msec) : 2=0.08%, 4=0.76%, 10=58.29%, 20=40.38%, 50=0.49% 00:33:04.973 cpu : usr=4.87%, sys=7.55%, ctx=463, majf=0, minf=1 00:33:04.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:04.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.973 issued rwts: total=6144,6518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.974 job2: (groupid=0, jobs=1): err= 0: pid=1540433: Thu Dec 5 21:26:12 2024 00:33:04.974 read: IOPS=3247, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1003msec) 00:33:04.974 slat (nsec): min=1475, max=15459k, avg=135963.81, stdev=1005790.18 00:33:04.974 clat (usec): min=1119, max=75473, avg=18879.62, stdev=11058.30 00:33:04.974 lat 
(usec): min=2743, max=75480, avg=19015.59, stdev=11116.56 00:33:04.974 clat percentiles (usec): 00:33:04.974 | 1.00th=[ 6587], 5.00th=[ 7898], 10.00th=[ 9634], 20.00th=[11731], 00:33:04.974 | 30.00th=[12911], 40.00th=[14615], 50.00th=[15139], 60.00th=[16057], 00:33:04.974 | 70.00th=[20055], 80.00th=[25822], 90.00th=[31851], 95.00th=[42730], 00:33:04.974 | 99.00th=[61604], 99.50th=[63701], 99.90th=[74974], 99.95th=[74974], 00:33:04.974 | 99.99th=[74974] 00:33:04.974 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:33:04.974 slat (usec): min=2, max=23345, avg=129.04, stdev=1018.74 00:33:04.974 clat (usec): min=649, max=101869, avg=18338.06, stdev=13477.86 00:33:04.974 lat (usec): min=698, max=101874, avg=18467.09, stdev=13556.75 00:33:04.974 clat percentiles (msec): 00:33:04.974 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 10], 00:33:04.974 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:33:04.974 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 37], 95.00th=[ 48], 00:33:04.974 | 99.00th=[ 73], 99.50th=[ 73], 99.90th=[ 99], 99.95th=[ 99], 00:33:04.974 | 99.99th=[ 103] 00:33:04.974 bw ( KiB/s): min=13608, max=15064, per=18.22%, avg=14336.00, stdev=1029.55, samples=2 00:33:04.974 iops : min= 3402, max= 3766, avg=3584.00, stdev=257.39, samples=2 00:33:04.974 lat (usec) : 750=0.03% 00:33:04.974 lat (msec) : 2=0.04%, 4=1.10%, 10=15.48%, 20=52.04%, 50=27.67% 00:33:04.974 lat (msec) : 100=3.63%, 250=0.01% 00:33:04.974 cpu : usr=2.99%, sys=3.99%, ctx=238, majf=0, minf=1 00:33:04.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:04.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.974 issued rwts: total=3257,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.974 job3: (groupid=0, jobs=1): err= 0: pid=1540434: 
Thu Dec 5 21:26:12 2024 00:33:04.974 read: IOPS=5096, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1006msec) 00:33:04.974 slat (nsec): min=1477, max=11110k, avg=98605.46, stdev=758766.61 00:33:04.974 clat (usec): min=4588, max=44371, avg=12508.95, stdev=4332.47 00:33:04.974 lat (usec): min=4683, max=44374, avg=12607.56, stdev=4391.27 00:33:04.974 clat percentiles (usec): 00:33:04.974 | 1.00th=[ 6915], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10159], 00:33:04.974 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:33:04.974 | 70.00th=[13173], 80.00th=[14353], 90.00th=[16909], 95.00th=[19268], 00:33:04.974 | 99.00th=[33817], 99.50th=[42206], 99.90th=[43779], 99.95th=[44303], 00:33:04.974 | 99.99th=[44303] 00:33:04.974 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:33:04.974 slat (usec): min=2, max=13551, avg=80.66, stdev=571.16 00:33:04.974 clat (usec): min=474, max=44371, avg=11248.87, stdev=3650.32 00:33:04.974 lat (usec): min=506, max=44375, avg=11329.53, stdev=3675.93 00:33:04.974 clat percentiles (usec): 00:33:04.974 | 1.00th=[ 2737], 5.00th=[ 6259], 10.00th=[ 7308], 20.00th=[ 9503], 00:33:04.974 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11207], 00:33:04.974 | 70.00th=[12125], 80.00th=[13173], 90.00th=[13829], 95.00th=[16188], 00:33:04.974 | 99.00th=[28443], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:33:04.974 | 99.99th=[44303] 00:33:04.974 bw ( KiB/s): min=20352, max=23744, per=28.02%, avg=22048.00, stdev=2398.51, samples=2 00:33:04.974 iops : min= 5088, max= 5936, avg=5512.00, stdev=599.63, samples=2 00:33:04.974 lat (usec) : 500=0.02% 00:33:04.974 lat (msec) : 2=0.02%, 4=1.23%, 10=21.23%, 20=74.65%, 50=2.85% 00:33:04.974 cpu : usr=4.38%, sys=6.17%, ctx=461, majf=0, minf=1 00:33:04.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:04.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.974 issued rwts: total=5127,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.974 00:33:04.974 Run status group 0 (all jobs): 00:33:04.974 READ: bw=71.6MiB/s (75.1MB/s), 12.7MiB/s-23.8MiB/s (13.3MB/s-25.0MB/s), io=72.1MiB (75.7MB), run=1003-1008msec 00:33:04.974 WRITE: bw=76.8MiB/s (80.6MB/s), 14.0MiB/s-25.3MiB/s (14.6MB/s-26.5MB/s), io=77.5MiB (81.2MB), run=1003-1008msec 00:33:04.974 00:33:04.974 Disk stats (read/write): 00:33:04.974 nvme0n1: ios=3106/3271, merge=0/0, ticks=44609/53213, in_queue=97822, util=99.60% 00:33:04.974 nvme0n2: ios=5238/5632, merge=0/0, ticks=53650/49837, in_queue=103487, util=98.27% 00:33:04.974 nvme0n3: ios=2593/2944, merge=0/0, ticks=30036/34034, in_queue=64070, util=97.71% 00:33:04.974 nvme0n4: ios=4650/4608, merge=0/0, ticks=56100/48972, in_queue=105072, util=98.22% 00:33:04.974 21:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:04.974 21:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1540667 00:33:04.974 21:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:04.974 21:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:04.974 [global] 00:33:04.974 thread=1 00:33:04.974 invalidate=1 00:33:04.974 rw=read 00:33:04.974 time_based=1 00:33:04.974 runtime=10 00:33:04.974 ioengine=libaio 00:33:04.974 direct=1 00:33:04.974 bs=4096 00:33:04.974 iodepth=1 00:33:04.974 norandommap=1 00:33:04.974 numjobs=1 00:33:04.974 00:33:04.974 [job0] 00:33:04.974 filename=/dev/nvme0n1 00:33:04.974 [job1] 00:33:04.974 filename=/dev/nvme0n2 00:33:04.974 [job2] 00:33:04.974 filename=/dev/nvme0n3 00:33:04.974 [job3] 00:33:04.974 filename=/dev/nvme0n4 
00:33:04.974 Could not set queue depth (nvme0n1) 00:33:04.974 Could not set queue depth (nvme0n2) 00:33:04.974 Could not set queue depth (nvme0n3) 00:33:04.974 Could not set queue depth (nvme0n4) 00:33:04.974 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.974 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.974 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.974 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.974 fio-3.35 00:33:04.974 Starting 4 threads 00:33:08.249 21:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:08.249 21:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:08.249 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9580544, buflen=4096 00:33:08.249 fio: pid=1540837, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:08.249 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2404352, buflen=4096 00:33:08.249 fio: pid=1540832, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:08.249 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.249 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:08.249 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11145216, 
buflen=4096 00:33:08.249 fio: pid=1540814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:08.249 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.249 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:08.507 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57643008, buflen=4096 00:33:08.507 fio: pid=1540822, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:08.507 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.507 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:08.507 00:33:08.507 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1540814: Thu Dec 5 21:26:16 2024 00:33:08.507 read: IOPS=862, BW=3448KiB/s (3530kB/s)(10.6MiB/3157msec) 00:33:08.507 slat (usec): min=6, max=18401, avg=19.60, stdev=419.90 00:33:08.507 clat (usec): min=173, max=42485, avg=1131.32, stdev=5867.23 00:33:08.507 lat (usec): min=180, max=42492, avg=1149.92, stdev=5881.03 00:33:08.507 clat percentiles (usec): 00:33:08.507 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 239], 00:33:08.507 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:33:08.507 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 441], 00:33:08.507 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:33:08.507 | 99.99th=[42730] 00:33:08.507 bw ( KiB/s): min= 312, max= 8192, per=15.08%, avg=3545.50, stdev=3018.99, 
samples=6 00:33:08.507 iops : min= 78, max= 2048, avg=886.33, stdev=754.76, samples=6 00:33:08.507 lat (usec) : 250=49.60%, 500=47.80%, 750=0.37% 00:33:08.507 lat (msec) : 20=0.04%, 50=2.17% 00:33:08.507 cpu : usr=0.22%, sys=0.82%, ctx=2725, majf=0, minf=2 00:33:08.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 issued rwts: total=2722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.507 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1540822: Thu Dec 5 21:26:16 2024 00:33:08.507 read: IOPS=4194, BW=16.4MiB/s (17.2MB/s)(55.0MiB/3355msec) 00:33:08.507 slat (usec): min=6, max=15478, avg=11.00, stdev=187.54 00:33:08.507 clat (usec): min=171, max=41175, avg=224.47, stdev=844.08 00:33:08.507 lat (usec): min=185, max=49876, avg=235.47, stdev=894.34 00:33:08.507 clat percentiles (usec): 00:33:08.507 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:33:08.507 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:33:08.507 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 255], 00:33:08.507 | 99.00th=[ 318], 99.50th=[ 375], 99.90th=[ 515], 99.95th=[ 3359], 00:33:08.507 | 99.99th=[41157] 00:33:08.507 bw ( KiB/s): min=15384, max=19784, per=77.39%, avg=18196.17, stdev=1696.06, samples=6 00:33:08.507 iops : min= 3846, max= 4946, avg=4549.00, stdev=424.05, samples=6 00:33:08.507 lat (usec) : 250=93.78%, 500=6.06%, 750=0.09% 00:33:08.507 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.04% 00:33:08.507 cpu : usr=1.25%, sys=4.17%, ctx=14080, majf=0, minf=2 00:33:08.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:08.507 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 issued rwts: total=14074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.507 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1540832: Thu Dec 5 21:26:16 2024 00:33:08.507 read: IOPS=201, BW=803KiB/s (822kB/s)(2348KiB/2925msec) 00:33:08.507 slat (usec): min=6, max=17799, avg=61.19, stdev=898.63 00:33:08.507 clat (usec): min=191, max=41978, avg=4882.49, stdev=12849.43 00:33:08.507 lat (usec): min=200, max=42001, avg=4943.76, stdev=12865.23 00:33:08.507 clat percentiles (usec): 00:33:08.507 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 247], 00:33:08.507 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:33:08.507 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[40633], 95.00th=[41157], 00:33:08.507 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:33:08.507 | 99.99th=[42206] 00:33:08.507 bw ( KiB/s): min= 104, max= 2864, per=3.09%, avg=726.40, stdev=1200.65, samples=5 00:33:08.507 iops : min= 26, max= 716, avg=181.60, stdev=300.16, samples=5 00:33:08.507 lat (usec) : 250=22.28%, 500=65.48%, 750=0.17%, 1000=0.17% 00:33:08.507 lat (msec) : 2=0.17%, 10=0.34%, 50=11.22% 00:33:08.507 cpu : usr=0.00%, sys=0.31%, ctx=591, majf=0, minf=1 00:33:08.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.507 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1540837: Thu Dec 5 21:26:16 2024 
00:33:08.507 read: IOPS=858, BW=3432KiB/s (3515kB/s)(9356KiB/2726msec) 00:33:08.507 slat (nsec): min=6536, max=31300, avg=7706.07, stdev=2476.40 00:33:08.507 clat (usec): min=185, max=42047, avg=1147.41, stdev=6015.89 00:33:08.507 lat (usec): min=192, max=42070, avg=1155.11, stdev=6018.01 00:33:08.507 clat percentiles (usec): 00:33:08.507 | 1.00th=[ 190], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:33:08.507 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:33:08.507 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 302], 00:33:08.507 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:33:08.507 | 99.99th=[42206] 00:33:08.507 bw ( KiB/s): min= 96, max=14856, per=15.88%, avg=3734.40, stdev=6391.24, samples=5 00:33:08.507 iops : min= 24, max= 3714, avg=933.60, stdev=1597.81, samples=5 00:33:08.507 lat (usec) : 250=71.28%, 500=26.41%, 750=0.04% 00:33:08.507 lat (msec) : 50=2.22% 00:33:08.507 cpu : usr=0.22%, sys=0.81%, ctx=2340, majf=0, minf=2 00:33:08.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.507 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.507 00:33:08.507 Run status group 0 (all jobs): 00:33:08.507 READ: bw=23.0MiB/s (24.1MB/s), 803KiB/s-16.4MiB/s (822kB/s-17.2MB/s), io=77.0MiB (80.8MB), run=2726-3355msec 00:33:08.507 00:33:08.507 Disk stats (read/write): 00:33:08.507 nvme0n1: ios=2720/0, merge=0/0, ticks=3032/0, in_queue=3032, util=94.76% 00:33:08.507 nvme0n2: ios=14109/0, merge=0/0, ticks=3997/0, in_queue=3997, util=98.45% 00:33:08.507 nvme0n3: ios=482/0, merge=0/0, ticks=2823/0, in_queue=2823, util=95.74% 00:33:08.507 nvme0n4: ios=2336/0, merge=0/0, ticks=2554/0, in_queue=2554, util=96.44% 
00:33:08.765 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.765 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:09.022 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.022 21:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:09.279 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.279 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:09.279 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.279 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:09.537 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:09.537 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1540667 00:33:09.537 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:09.537 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:33:09.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:09.794 nvmf hotplug test: fio failed as expected 00:33:09.794 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.053 rmmod nvme_tcp 00:33:10.053 rmmod nvme_fabrics 00:33:10.053 rmmod nvme_keyring 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.053 21:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1537985 ']' 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1537985 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1537985 ']' 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1537985 00:33:10.053 21:26:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537985 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537985' 00:33:10.053 killing process with pid 1537985 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1537985 00:33:10.053 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1537985 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:10.313 
21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.313 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.225 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.225 00:33:12.225 real 0m26.517s 00:33:12.225 user 1m32.699s 00:33:12.225 sys 0m11.457s 00:33:12.225 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:12.225 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.225 ************************************ 00:33:12.225 END TEST nvmf_fio_target 00:33:12.225 ************************************ 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.485 ************************************ 00:33:12.485 START TEST nvmf_bdevio 00:33:12.485 
************************************ 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:12.485 * Looking for test storage... 00:33:12.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:12.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.485 --rc genhtml_branch_coverage=1 00:33:12.485 --rc genhtml_function_coverage=1 00:33:12.485 --rc genhtml_legend=1 00:33:12.485 --rc geninfo_all_blocks=1 00:33:12.485 --rc geninfo_unexecuted_blocks=1 00:33:12.485 00:33:12.485 ' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:12.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.485 --rc genhtml_branch_coverage=1 00:33:12.485 --rc genhtml_function_coverage=1 00:33:12.485 --rc genhtml_legend=1 00:33:12.485 --rc geninfo_all_blocks=1 00:33:12.485 --rc geninfo_unexecuted_blocks=1 00:33:12.485 00:33:12.485 ' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:12.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.485 --rc genhtml_branch_coverage=1 00:33:12.485 --rc genhtml_function_coverage=1 00:33:12.485 --rc genhtml_legend=1 00:33:12.485 --rc geninfo_all_blocks=1 00:33:12.485 --rc geninfo_unexecuted_blocks=1 00:33:12.485 00:33:12.485 ' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:12.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:33:12.485 --rc genhtml_branch_coverage=1 00:33:12.485 --rc genhtml_function_coverage=1 00:33:12.485 --rc genhtml_legend=1 00:33:12.485 --rc geninfo_all_blocks=1 00:33:12.485 --rc geninfo_unexecuted_blocks=1 00:33:12.485 00:33:12.485 ' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:12.485 21:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.485 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.486 21:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.486 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.073 21:26:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:19.073 21:26:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:19.073 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.073 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:19.074 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:19.074 Found net devices under 0000:86:00.0: cvl_0_0 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:19.074 Found net devices under 0000:86:00.1: cvl_0_1 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.074 
21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:19.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:33:19.074 00:33:19.074 --- 10.0.0.2 ping statistics --- 00:33:19.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.074 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:33:19.074 00:33:19.074 --- 10.0.0.1 ping statistics --- 00:33:19.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.074 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1545172 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1545172 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1545172 ']' 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:19.074 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.074 [2024-12-05 21:26:26.538416] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:19.074 [2024-12-05 21:26:26.539310] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:33:19.074 [2024-12-05 21:26:26.539344] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.074 [2024-12-05 21:26:26.615503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:19.074 [2024-12-05 21:26:26.656771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.074 [2024-12-05 21:26:26.656808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.074 [2024-12-05 21:26:26.656815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.074 [2024-12-05 21:26:26.656821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.074 [2024-12-05 21:26:26.656826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.074 [2024-12-05 21:26:26.658349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:19.075 [2024-12-05 21:26:26.658459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:19.075 [2024-12-05 21:26:26.658544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:19.075 [2024-12-05 21:26:26.658545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:19.075 [2024-12-05 21:26:26.725639] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:19.075 [2024-12-05 21:26:26.726392] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:19.075 [2024-12-05 21:26:26.726547] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:19.075 [2024-12-05 21:26:26.726719] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:19.075 [2024-12-05 21:26:26.726780] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.075 [2024-12-05 21:26:26.795224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.075 Malloc0 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.075 [2024-12-05 21:26:26.875304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:19.075 { 00:33:19.075 "params": { 00:33:19.075 "name": "Nvme$subsystem", 00:33:19.075 "trtype": "$TEST_TRANSPORT", 00:33:19.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.075 "adrfam": "ipv4", 00:33:19.075 "trsvcid": "$NVMF_PORT", 00:33:19.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:19.075 "hdgst": ${hdgst:-false}, 00:33:19.075 "ddgst": ${ddgst:-false} 00:33:19.075 }, 00:33:19.075 "method": "bdev_nvme_attach_controller" 00:33:19.075 } 00:33:19.075 EOF 00:33:19.075 )") 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:19.075 21:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:19.075 "params": { 00:33:19.075 "name": "Nvme1", 00:33:19.075 "trtype": "tcp", 00:33:19.075 "traddr": "10.0.0.2", 00:33:19.075 "adrfam": "ipv4", 00:33:19.075 "trsvcid": "4420", 00:33:19.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:19.075 "hdgst": false, 00:33:19.075 "ddgst": false 00:33:19.075 }, 00:33:19.075 "method": "bdev_nvme_attach_controller" 00:33:19.075 }' 00:33:19.075 [2024-12-05 21:26:26.926150] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:33:19.075 [2024-12-05 21:26:26.926197] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545284 ] 00:33:19.075 [2024-12-05 21:26:27.000993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:19.075 [2024-12-05 21:26:27.044218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.075 [2024-12-05 21:26:27.044326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.075 [2024-12-05 21:26:27.044327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.332 I/O targets: 00:33:19.332 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:19.332 00:33:19.332 00:33:19.332 CUnit - A unit testing framework for C - Version 2.1-3 00:33:19.332 http://cunit.sourceforge.net/ 00:33:19.332 00:33:19.332 00:33:19.332 Suite: bdevio tests on: Nvme1n1 00:33:19.332 Test: blockdev write read block ...passed 00:33:19.332 Test: blockdev write zeroes read block ...passed 00:33:19.332 Test: blockdev write zeroes read no split ...passed 00:33:19.332 Test: blockdev 
write zeroes read split ...passed 00:33:19.332 Test: blockdev write zeroes read split partial ...passed 00:33:19.332 Test: blockdev reset ...[2024-12-05 21:26:27.386886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:19.332 [2024-12-05 21:26:27.386947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2df30 (9): Bad file descriptor 00:33:19.332 [2024-12-05 21:26:27.432199] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:19.332 passed 00:33:19.590 Test: blockdev write read 8 blocks ...passed 00:33:19.591 Test: blockdev write read size > 128k ...passed 00:33:19.591 Test: blockdev write read invalid size ...passed 00:33:19.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:19.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:19.591 Test: blockdev write read max offset ...passed 00:33:19.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:19.591 Test: blockdev writev readv 8 blocks ...passed 00:33:19.591 Test: blockdev writev readv 30 x 1block ...passed 00:33:19.591 Test: blockdev writev readv block ...passed 00:33:19.591 Test: blockdev writev readv size > 128k ...passed 00:33:19.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:19.591 Test: blockdev comparev and writev ...[2024-12-05 21:26:27.686411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.686438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.686452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 
[2024-12-05 21:26:27.686460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.686752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.686762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.686774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.686780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.687064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.687074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.687085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.687093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.687384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.687396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:19.591 [2024-12-05 21:26:27.687408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.591 [2024-12-05 21:26:27.687416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:19.849 passed 00:33:19.849 Test: blockdev nvme passthru rw ...passed 00:33:19.849 Test: blockdev nvme passthru vendor specific ...[2024-12-05 21:26:27.769707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.849 [2024-12-05 21:26:27.769726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:19.849 [2024-12-05 21:26:27.769832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.849 [2024-12-05 21:26:27.769843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:19.849 [2024-12-05 21:26:27.769952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.849 [2024-12-05 21:26:27.769961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:19.849 [2024-12-05 21:26:27.770068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.849 [2024-12-05 21:26:27.770078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:19.849 passed 00:33:19.849 Test: blockdev nvme admin passthru ...passed 00:33:19.849 Test: blockdev copy ...passed 00:33:19.849 00:33:19.849 Run Summary: Type Total Ran Passed Failed Inactive 00:33:19.849 suites 1 1 n/a 0 0 00:33:19.849 tests 23 23 23 0 0 00:33:19.849 asserts 152 152 152 0 n/a 00:33:19.849 00:33:19.849 Elapsed time = 1.190 
seconds 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.108 21:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.108 rmmod nvme_tcp 00:33:20.108 rmmod nvme_fabrics 00:33:20.108 rmmod nvme_keyring 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1545172 ']' 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1545172 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1545172 ']' 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1545172 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1545172 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1545172' 00:33:20.108 killing process with pid 1545172 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1545172 00:33:20.108 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1545172 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.367 21:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.270 21:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.270 00:33:22.270 real 0m9.985s 00:33:22.270 user 0m8.877s 00:33:22.270 sys 0m5.216s 00:33:22.270 21:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.270 21:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:22.270 ************************************ 00:33:22.270 END TEST nvmf_bdevio 00:33:22.270 ************************************ 00:33:22.529 21:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:22.529 00:33:22.529 real 4m34.725s 00:33:22.529 user 9m8.773s 00:33:22.529 sys 1m51.259s 00:33:22.529 21:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:33:22.529 21:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.529 ************************************ 00:33:22.529 END TEST nvmf_target_core_interrupt_mode 00:33:22.529 ************************************ 00:33:22.529 21:26:30 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:22.529 21:26:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.529 21:26:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.529 21:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.529 ************************************ 00:33:22.529 START TEST nvmf_interrupt 00:33:22.529 ************************************ 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:22.529 * Looking for test storage... 
00:33:22.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.529 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:22.530 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.789 --rc genhtml_branch_coverage=1 00:33:22.789 --rc genhtml_function_coverage=1 00:33:22.789 --rc genhtml_legend=1 00:33:22.789 --rc geninfo_all_blocks=1 00:33:22.789 --rc geninfo_unexecuted_blocks=1 00:33:22.789 00:33:22.789 ' 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.789 --rc genhtml_branch_coverage=1 00:33:22.789 --rc 
genhtml_function_coverage=1 00:33:22.789 --rc genhtml_legend=1 00:33:22.789 --rc geninfo_all_blocks=1 00:33:22.789 --rc geninfo_unexecuted_blocks=1 00:33:22.789 00:33:22.789 ' 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.789 --rc genhtml_branch_coverage=1 00:33:22.789 --rc genhtml_function_coverage=1 00:33:22.789 --rc genhtml_legend=1 00:33:22.789 --rc geninfo_all_blocks=1 00:33:22.789 --rc geninfo_unexecuted_blocks=1 00:33:22.789 00:33:22.789 ' 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.789 --rc genhtml_branch_coverage=1 00:33:22.789 --rc genhtml_function_coverage=1 00:33:22.789 --rc genhtml_legend=1 00:33:22.789 --rc geninfo_all_blocks=1 00:33:22.789 --rc geninfo_unexecuted_blocks=1 00:33:22.789 00:33:22.789 ' 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.789 
21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.789 21:26:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.790 
21:26:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.790 21:26:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.790 
21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.790 21:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.360 21:26:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:29.360 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:29.360 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.360 21:26:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:29.360 Found net devices under 0000:86:00.0: cvl_0_0 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.360 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:29.361 Found net devices under 0000:86:00.1: cvl_0_1 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.361 21:26:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:29.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:29.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:33:29.361 00:33:29.361 --- 10.0.0.2 ping statistics --- 00:33:29.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.361 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:29.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:33:29.361 00:33:29.361 --- 10.0.0.1 ping statistics --- 00:33:29.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.361 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:29.361 21:26:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1548874 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1548874 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1548874 ']' 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.361 21:26:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.361 [2024-12-05 21:26:36.614881] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:29.361 [2024-12-05 21:26:36.615828] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:33:29.361 [2024-12-05 21:26:36.615866] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.361 [2024-12-05 21:26:36.694811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:29.361 [2024-12-05 21:26:36.736052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.361 [2024-12-05 21:26:36.736088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.361 [2024-12-05 21:26:36.736094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:29.361 [2024-12-05 21:26:36.736100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:29.361 [2024-12-05 21:26:36.736105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:29.361 [2024-12-05 21:26:36.737328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.361 [2024-12-05 21:26:36.737329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.361 [2024-12-05 21:26:36.806736] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:29.361 [2024-12-05 21:26:36.807261] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:29.361 [2024-12-05 21:26:36.807430] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:29.361 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.361 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:29.362 21:26:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.362 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.362 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:29.652 5000+0 records in 00:33:29.652 5000+0 records out 00:33:29.652 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0167129 s, 613 MB/s 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.652 AIO0 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.652 21:26:37 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.652 [2024-12-05 21:26:37.542154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:29.652 [2024-12-05 21:26:37.582478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1548874 0 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548874 0 idle 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:29.652 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548874 root 20 0 128.2g 46080 33792 S 6.2 0.0 0:00.27 reactor_0' 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548874 root 20 0 128.2g 46080 33792 S 6.2 0.0 0:00.27 reactor_0 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.942 
21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1548874 1 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548874 1 idle 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548922 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548922 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1549114 00:33:29.942 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1548874 0 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1548874 0 busy 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:29.943 21:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548874 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.27 reactor_0' 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548874 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.27 reactor_0 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:30.200 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:30.201 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:30.201 21:26:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:33:31.132 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:33:31.132 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:31.132 21:26:39 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:31.132 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548874 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.56 reactor_0' 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548874 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.56 reactor_0 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1548874 1 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1548874 1 busy 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:31.390 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548922 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1' 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548922 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:31.648 21:26:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1549114 00:33:41.600 Initializing NVMe Controllers 00:33:41.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:41.600 
Controller IO queue size 256, less than required. 00:33:41.600 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:41.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:41.600 Initialization complete. Launching workers. 00:33:41.600 ======================================================== 00:33:41.600 Latency(us) 00:33:41.600 Device Information : IOPS MiB/s Average min max 00:33:41.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16459.18 64.29 15562.75 2935.57 55969.06 00:33:41.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16678.97 65.15 15353.33 7786.67 56309.15 00:33:41.600 ======================================================== 00:33:41.600 Total : 33138.15 129.45 15457.35 2935.57 56309.15 00:33:41.600 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1548874 0 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548874 0 idle 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:41.600 21:26:48 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548874 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.26 reactor_0' 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548874 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.26 reactor_0 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1548874 1 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548874 1 idle 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:41.600 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548922 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548922 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:33:41.601 21:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:41.601 21:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:41.601 21:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:41.601 21:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:41.601 21:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:41.601 21:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:42.981 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:42.981 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:42.981 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1548874 0 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548874 0 idle 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:43.240 21:26:51 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548874 root 20 0 128.2g 72960 33792 S 6.2 0.0 0:20.53 reactor_0' 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548874 root 20 0 128.2g 72960 33792 S 6.2 0.0 0:20.53 reactor_0 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1548874 1 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1548874 1 idle 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1548874 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1548874 -w 256 00:33:43.240 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1548922 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.11 reactor_1' 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1548922 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.11 reactor_1 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:43.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.500 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.500 rmmod nvme_tcp 00:33:43.758 rmmod nvme_fabrics 00:33:43.758 rmmod nvme_keyring 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1548874 ']' 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1548874 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1548874 ']' 00:33:43.758 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1548874 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1548874 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1548874' 00:33:43.759 killing process with pid 1548874 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1548874 00:33:43.759 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1548874 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:44.017 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:44.018 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.018 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.018 21:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.018 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:44.018 21:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.944 21:26:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.944 00:33:45.944 real 0m23.508s 00:33:45.944 user 0m39.951s 00:33:45.944 sys 0m8.503s 00:33:45.944 21:26:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.944 21:26:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:45.944 ************************************ 00:33:45.944 END TEST nvmf_interrupt 00:33:45.944 ************************************ 00:33:45.944 00:33:45.944 real 27m29.672s 00:33:45.944 user 56m41.793s 00:33:45.944 sys 9m19.383s 00:33:45.944 21:26:54 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.944 21:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.944 ************************************ 00:33:45.944 END TEST nvmf_tcp 00:33:45.944 ************************************ 00:33:45.944 21:26:54 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:45.944 21:26:54 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:45.944 21:26:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.944 21:26:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.944 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:33:46.205 ************************************ 00:33:46.205 START TEST spdkcli_nvmf_tcp 00:33:46.205 ************************************ 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:46.205 * Looking for test storage... 00:33:46.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.205 --rc genhtml_branch_coverage=1 00:33:46.205 --rc genhtml_function_coverage=1 00:33:46.205 --rc genhtml_legend=1 00:33:46.205 --rc geninfo_all_blocks=1 
00:33:46.205 --rc geninfo_unexecuted_blocks=1 00:33:46.205 00:33:46.205 ' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.205 --rc genhtml_branch_coverage=1 00:33:46.205 --rc genhtml_function_coverage=1 00:33:46.205 --rc genhtml_legend=1 00:33:46.205 --rc geninfo_all_blocks=1 00:33:46.205 --rc geninfo_unexecuted_blocks=1 00:33:46.205 00:33:46.205 ' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.205 --rc genhtml_branch_coverage=1 00:33:46.205 --rc genhtml_function_coverage=1 00:33:46.205 --rc genhtml_legend=1 00:33:46.205 --rc geninfo_all_blocks=1 00:33:46.205 --rc geninfo_unexecuted_blocks=1 00:33:46.205 00:33:46.205 ' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.205 --rc genhtml_branch_coverage=1 00:33:46.205 --rc genhtml_function_coverage=1 00:33:46.205 --rc genhtml_legend=1 00:33:46.205 --rc geninfo_all_blocks=1 00:33:46.205 --rc geninfo_unexecuted_blocks=1 00:33:46.205 00:33:46.205 ' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:46.205 21:26:54 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.205 21:26:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:46.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1551930 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1551930 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1551930 ']' 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.206 21:26:54 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.206 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.465 [2024-12-05 21:26:54.340729] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:33:46.465 [2024-12-05 21:26:54.340776] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551930 ] 00:33:46.465 [2024-12-05 21:26:54.413139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:46.465 [2024-12-05 21:26:54.456396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.465 [2024-12-05 21:26:54.456399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.465 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.465 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:46.465 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:46.465 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.465 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.724 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:46.724 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:33:46.724 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:46.724 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.724 21:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:46.724 21:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:46.724 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:46.724 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:46.724 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:46.724 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:46.724 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:46.724 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:46.724 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:46.724 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:46.724 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:46.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:46.724 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:46.724 ' 00:33:49.250 [2024-12-05 21:26:57.273910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.623 [2024-12-05 21:26:58.614409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:33:53.149 [2024-12-05 21:27:01.094129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:55.681 [2024-12-05 21:27:03.256949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:57.058 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:57.058 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:57.058 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:57.058 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:57.058 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:57.058 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:57.058 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:57.058 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:57.058 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:57.058 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:57.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:57.058 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:57.058 21:27:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.623 21:27:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:57.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:33:57.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:57.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:57.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:57.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:57.623 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:57.623 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:57.623 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:57.623 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:57.623 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:57.623 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:57.623 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:57.623 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:57.623 ' 00:34:04.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:04.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:04.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:04.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:04.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:04.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:04.196 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:04.196 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:04.196 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:04.196 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:04.196 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:04.196 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:04.196 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:04.196 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1551930 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1551930 ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1551930 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1551930 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1551930' 00:34:04.196 killing process with pid 1551930 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1551930 00:34:04.196 21:27:11 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1551930 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1551930 ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1551930 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1551930 ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1551930 00:34:04.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1551930) - No such process 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1551930 is not found' 00:34:04.196 Process with pid 1551930 is not found 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:04.196 00:34:04.196 real 0m17.327s 00:34:04.196 user 0m38.147s 00:34:04.196 sys 0m0.783s 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.196 21:27:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:04.196 ************************************ 00:34:04.196 END TEST spdkcli_nvmf_tcp 00:34:04.196 ************************************ 00:34:04.196 21:27:11 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:04.196 21:27:11 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:34:04.196 21:27:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.196 21:27:11 -- common/autotest_common.sh@10 -- # set +x 00:34:04.196 ************************************ 00:34:04.196 START TEST nvmf_identify_passthru 00:34:04.196 ************************************ 00:34:04.196 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:04.196 * Looking for test storage... 00:34:04.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:04.196 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:04.196 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:34:04.196 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:04.196 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:34:04.196 21:27:11 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:04.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.197 --rc genhtml_branch_coverage=1 00:34:04.197 --rc genhtml_function_coverage=1 00:34:04.197 --rc genhtml_legend=1 
00:34:04.197 --rc geninfo_all_blocks=1 00:34:04.197 --rc geninfo_unexecuted_blocks=1 00:34:04.197 00:34:04.197 ' 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:04.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.197 --rc genhtml_branch_coverage=1 00:34:04.197 --rc genhtml_function_coverage=1 00:34:04.197 --rc genhtml_legend=1 00:34:04.197 --rc geninfo_all_blocks=1 00:34:04.197 --rc geninfo_unexecuted_blocks=1 00:34:04.197 00:34:04.197 ' 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:04.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.197 --rc genhtml_branch_coverage=1 00:34:04.197 --rc genhtml_function_coverage=1 00:34:04.197 --rc genhtml_legend=1 00:34:04.197 --rc geninfo_all_blocks=1 00:34:04.197 --rc geninfo_unexecuted_blocks=1 00:34:04.197 00:34:04.197 ' 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:04.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.197 --rc genhtml_branch_coverage=1 00:34:04.197 --rc genhtml_function_coverage=1 00:34:04.197 --rc genhtml_legend=1 00:34:04.197 --rc geninfo_all_blocks=1 00:34:04.197 --rc geninfo_unexecuted_blocks=1 00:34:04.197 00:34:04.197 ' 00:34:04.197 21:27:11 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.197 21:27:11 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:04.197 21:27:11 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:04.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.197 21:27:11 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:04.197 21:27:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.197 21:27:11 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.197 21:27:11 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.197 21:27:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.495 
21:27:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:09.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.495 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:09.496 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:09.496 Found net devices under 0000:86:00.0: cvl_0_0 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.496 21:27:17 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:09.496 Found net devices under 0000:86:00.1: cvl_0_1 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.496 
21:27:17 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:09.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:34:09.496 00:34:09.496 --- 10.0.0.2 ping statistics --- 00:34:09.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.496 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:09.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:34:09.496 00:34:09.496 --- 10.0.0.1 ping statistics --- 00:34:09.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.496 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:09.496 21:27:17 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:09.496 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:09.496 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:09.496 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.756 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:09.756 
21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:09.756 21:27:17 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:09.756 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:09.756 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:09.756 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:09.756 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:09.756 21:27:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:15.026 21:27:22 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:34:15.026 21:27:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:15.026 21:27:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:15.026 21:27:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1559782 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:19.211 21:27:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1559782 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1559782 ']' 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.211 21:27:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.469 [2024-12-05 21:27:27.326863] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:34:19.469 [2024-12-05 21:27:27.326915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.469 [2024-12-05 21:27:27.407598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.469 [2024-12-05 21:27:27.450125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.469 [2024-12-05 21:27:27.450160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.469 [2024-12-05 21:27:27.450167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.469 [2024-12-05 21:27:27.450175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.469 [2024-12-05 21:27:27.450181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:19.469 [2024-12-05 21:27:27.451603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.469 [2024-12-05 21:27:27.451711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.469 [2024-12-05 21:27:27.451743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.469 [2024-12-05 21:27:27.451744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:20.399 21:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.399 INFO: Log level set to 20 00:34:20.399 INFO: Requests: 00:34:20.399 { 00:34:20.399 "jsonrpc": "2.0", 00:34:20.399 "method": "nvmf_set_config", 00:34:20.399 "id": 1, 00:34:20.399 "params": { 00:34:20.399 "admin_cmd_passthru": { 00:34:20.399 "identify_ctrlr": true 00:34:20.399 } 00:34:20.399 } 00:34:20.399 } 00:34:20.399 00:34:20.399 INFO: response: 00:34:20.399 { 00:34:20.399 "jsonrpc": "2.0", 00:34:20.399 "id": 1, 00:34:20.399 "result": true 00:34:20.399 } 00:34:20.399 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.399 21:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.399 INFO: Setting log level to 20 00:34:20.399 INFO: Setting log level to 20 00:34:20.399 INFO: Log level set to 20 00:34:20.399 INFO: Log level set to 20 00:34:20.399 
INFO: Requests: 00:34:20.399 { 00:34:20.399 "jsonrpc": "2.0", 00:34:20.399 "method": "framework_start_init", 00:34:20.399 "id": 1 00:34:20.399 } 00:34:20.399 00:34:20.399 INFO: Requests: 00:34:20.399 { 00:34:20.399 "jsonrpc": "2.0", 00:34:20.399 "method": "framework_start_init", 00:34:20.399 "id": 1 00:34:20.399 } 00:34:20.399 00:34:20.399 [2024-12-05 21:27:28.233751] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:20.399 INFO: response: 00:34:20.399 { 00:34:20.399 "jsonrpc": "2.0", 00:34:20.399 "id": 1, 00:34:20.399 "result": true 00:34:20.399 } 00:34:20.399 00:34:20.399 INFO: response: 00:34:20.399 { 00:34:20.399 "jsonrpc": "2.0", 00:34:20.399 "id": 1, 00:34:20.399 "result": true 00:34:20.399 } 00:34:20.399 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.399 21:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.399 INFO: Setting log level to 40 00:34:20.399 INFO: Setting log level to 40 00:34:20.399 INFO: Setting log level to 40 00:34:20.399 [2024-12-05 21:27:28.247068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.399 21:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.399 21:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:20.399 21:27:28 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.399 21:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.670 Nvme0n1 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.670 [2024-12-05 21:27:31.158683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.670 21:27:31 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.670 [ 00:34:23.670 { 00:34:23.670 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:23.670 "subtype": "Discovery", 00:34:23.670 "listen_addresses": [], 00:34:23.670 "allow_any_host": true, 00:34:23.670 "hosts": [] 00:34:23.670 }, 00:34:23.670 { 00:34:23.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.670 "subtype": "NVMe", 00:34:23.670 "listen_addresses": [ 00:34:23.670 { 00:34:23.670 "trtype": "TCP", 00:34:23.670 "adrfam": "IPv4", 00:34:23.670 "traddr": "10.0.0.2", 00:34:23.670 "trsvcid": "4420" 00:34:23.670 } 00:34:23.670 ], 00:34:23.670 "allow_any_host": true, 00:34:23.670 "hosts": [], 00:34:23.670 "serial_number": "SPDK00000000000001", 00:34:23.670 "model_number": "SPDK bdev Controller", 00:34:23.670 "max_namespaces": 1, 00:34:23.670 "min_cntlid": 1, 00:34:23.670 "max_cntlid": 65519, 00:34:23.670 "namespaces": [ 00:34:23.670 { 00:34:23.670 "nsid": 1, 00:34:23.670 "bdev_name": "Nvme0n1", 00:34:23.670 "name": "Nvme0n1", 00:34:23.670 "nguid": "ED3899F9F9FF4F5B8BC32F26AC8B1D3F", 00:34:23.670 "uuid": "ed3899f9-f9ff-4f5b-8bc3-2f26ac8b1d3f" 00:34:23.670 } 00:34:23.670 ] 00:34:23.670 } 00:34:23.670 ] 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.670 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:23.670 21:27:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.670 rmmod nvme_tcp 00:34:23.670 rmmod nvme_fabrics 00:34:23.670 rmmod nvme_keyring 00:34:23.670 21:27:31 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:23.670 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:23.671 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1559782 ']' 00:34:23.671 21:27:31 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1559782 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1559782 ']' 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1559782 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1559782 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1559782' 00:34:23.671 killing process with pid 1559782 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1559782 00:34:23.671 21:27:31 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1559782 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:25.564 21:27:33 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:25.564 21:27:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.564 21:27:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:25.564 21:27:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.102 21:27:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:28.102 00:34:28.102 real 0m24.145s 00:34:28.102 user 0m32.306s 00:34:28.102 sys 0m6.325s 00:34:28.102 21:27:35 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:28.102 21:27:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:28.102 ************************************ 00:34:28.102 END TEST nvmf_identify_passthru 00:34:28.102 ************************************ 00:34:28.102 21:27:35 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:28.102 21:27:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:28.102 21:27:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.102 21:27:35 -- common/autotest_common.sh@10 -- # set +x 00:34:28.102 ************************************ 00:34:28.102 START TEST nvmf_dif 00:34:28.102 ************************************ 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:28.102 * Looking for test storage... 
00:34:28.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:28.102 21:27:35 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:28.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.102 --rc genhtml_branch_coverage=1 00:34:28.102 --rc genhtml_function_coverage=1 00:34:28.102 --rc genhtml_legend=1 00:34:28.102 --rc geninfo_all_blocks=1 00:34:28.102 --rc geninfo_unexecuted_blocks=1 00:34:28.102 00:34:28.102 ' 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:28.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.102 --rc genhtml_branch_coverage=1 00:34:28.102 --rc genhtml_function_coverage=1 00:34:28.102 --rc genhtml_legend=1 00:34:28.102 --rc geninfo_all_blocks=1 00:34:28.102 --rc geninfo_unexecuted_blocks=1 00:34:28.102 00:34:28.102 ' 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:34:28.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.102 --rc genhtml_branch_coverage=1 00:34:28.102 --rc genhtml_function_coverage=1 00:34:28.102 --rc genhtml_legend=1 00:34:28.102 --rc geninfo_all_blocks=1 00:34:28.102 --rc geninfo_unexecuted_blocks=1 00:34:28.102 00:34:28.102 ' 00:34:28.102 21:27:35 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:28.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.102 --rc genhtml_branch_coverage=1 00:34:28.102 --rc genhtml_function_coverage=1 00:34:28.102 --rc genhtml_legend=1 00:34:28.102 --rc geninfo_all_blocks=1 00:34:28.102 --rc geninfo_unexecuted_blocks=1 00:34:28.102 00:34:28.102 ' 00:34:28.102 21:27:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:28.102 21:27:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:28.102 21:27:35 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.103 21:27:35 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:28.103 21:27:35 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.103 21:27:35 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.103 21:27:35 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.103 21:27:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.103 21:27:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.103 21:27:35 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.103 21:27:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:28.103 21:27:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:28.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:28.103 21:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:28.103 21:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:28.103 21:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:28.103 21:27:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:28.103 21:27:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.103 21:27:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:28.103 21:27:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:28.103 21:27:35 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:28.103 21:27:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:34.673 21:27:41 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:34.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:34.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.673 21:27:41 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:34.673 Found net devices under 0000:86:00.0: cvl_0_0 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:34.673 Found net devices under 0000:86:00.1: cvl_0_1 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.673 
21:27:41 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.673 21:27:41 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:34.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:34.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:34:34.674 00:34:34.674 --- 10.0.0.2 ping statistics --- 00:34:34.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.674 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:34:34.674 00:34:34.674 --- 10.0.0.1 ping statistics --- 00:34:34.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.674 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:34.674 21:27:41 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:36.577 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:36.577 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:34:36.577 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:36.577 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:36.577 21:27:44 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.577 21:27:44 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:36.577 21:27:44 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:36.577 21:27:44 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.577 21:27:44 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:36.577 21:27:44 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:36.836 21:27:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:36.836 21:27:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:36.836 21:27:44 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.836 21:27:44 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1565485 00:34:36.836 21:27:44 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1565485 00:34:36.836 21:27:44 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1565485 ']' 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:36.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.836 21:27:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.836 [2024-12-05 21:27:44.742843] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:34:36.836 [2024-12-05 21:27:44.742884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.836 [2024-12-05 21:27:44.821878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.836 [2024-12-05 21:27:44.862404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.836 [2024-12-05 21:27:44.862439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.836 [2024-12-05 21:27:44.862446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.836 [2024-12-05 21:27:44.862452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.836 [2024-12-05 21:27:44.862460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:36.836 [2024-12-05 21:27:44.863011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.095 21:27:44 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.095 21:27:44 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:37.095 21:27:44 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.095 21:27:44 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.095 21:27:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.095 21:27:44 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.095 21:27:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:37.095 21:27:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:37.096 21:27:44 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.096 21:27:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.096 [2024-12-05 21:27:44.998274] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.096 21:27:45 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.096 21:27:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:37.096 21:27:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:37.096 21:27:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.096 21:27:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.096 ************************************ 00:34:37.096 START TEST fio_dif_1_default 00:34:37.096 ************************************ 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.096 bdev_null0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:37.096 [2024-12-05 21:27:45.066556] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:37.096 { 00:34:37.096 "params": { 00:34:37.096 "name": "Nvme$subsystem", 00:34:37.096 "trtype": "$TEST_TRANSPORT", 00:34:37.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.096 "adrfam": "ipv4", 00:34:37.096 "trsvcid": "$NVMF_PORT", 00:34:37.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.096 "hdgst": 
${hdgst:-false}, 00:34:37.096 "ddgst": ${ddgst:-false} 00:34:37.096 }, 00:34:37.096 "method": "bdev_nvme_attach_controller" 00:34:37.096 } 00:34:37.096 EOF 00:34:37.096 )") 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:37.096 "params": { 00:34:37.096 "name": "Nvme0", 00:34:37.096 "trtype": "tcp", 00:34:37.096 "traddr": "10.0.0.2", 00:34:37.096 "adrfam": "ipv4", 00:34:37.096 "trsvcid": "4420", 00:34:37.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.096 "hdgst": false, 00:34:37.096 "ddgst": false 00:34:37.096 }, 00:34:37.096 "method": "bdev_nvme_attach_controller" 00:34:37.096 }' 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.096 21:27:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.353 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:37.353 fio-3.35 
00:34:37.353 Starting 1 thread 00:34:49.551 00:34:49.551 filename0: (groupid=0, jobs=1): err= 0: pid=1565861: Thu Dec 5 21:27:55 2024 00:34:49.551 read: IOPS=191, BW=766KiB/s (784kB/s)(7664KiB/10007msec) 00:34:49.551 slat (nsec): min=5971, max=34767, avg=6299.20, stdev=962.84 00:34:49.551 clat (usec): min=370, max=44372, avg=20872.65, stdev=20435.90 00:34:49.551 lat (usec): min=376, max=44407, avg=20878.95, stdev=20435.84 00:34:49.551 clat percentiles (usec): 00:34:49.551 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 396], 00:34:49.551 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[40633], 60.00th=[40633], 00:34:49.551 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:49.551 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:34:49.551 | 99.99th=[44303] 00:34:49.551 bw ( KiB/s): min= 704, max= 832, per=99.76%, avg=764.80, stdev=30.97, samples=20 00:34:49.551 iops : min= 176, max= 208, avg=191.20, stdev= 7.74, samples=20 00:34:49.551 lat (usec) : 500=49.48%, 750=0.42% 00:34:49.551 lat (msec) : 50=50.10% 00:34:49.551 cpu : usr=92.65%, sys=7.05%, ctx=26, majf=0, minf=0 00:34:49.551 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.551 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.551 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:49.551 00:34:49.551 Run status group 0 (all jobs): 00:34:49.551 READ: bw=766KiB/s (784kB/s), 766KiB/s-766KiB/s (784kB/s-784kB/s), io=7664KiB (7848kB), run=10007-10007msec 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 00:34:49.551 real 0m11.117s 00:34:49.551 user 0m16.216s 00:34:49.551 sys 0m1.006s 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 ************************************ 00:34:49.551 END TEST fio_dif_1_default 00:34:49.551 ************************************ 00:34:49.551 21:27:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:49.551 21:27:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:49.551 21:27:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 ************************************ 00:34:49.551 START TEST fio_dif_1_multi_subsystems 00:34:49.551 ************************************ 00:34:49.551 21:27:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 bdev_null0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 [2024-12-05 21:27:56.256264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 bdev_null1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:49.551 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:49.552 { 00:34:49.552 "params": { 00:34:49.552 "name": "Nvme$subsystem", 00:34:49.552 "trtype": "$TEST_TRANSPORT", 00:34:49.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.552 "adrfam": "ipv4", 00:34:49.552 "trsvcid": "$NVMF_PORT", 00:34:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.552 "hdgst": ${hdgst:-false}, 00:34:49.552 "ddgst": ${ddgst:-false} 00:34:49.552 }, 00:34:49.552 "method": "bdev_nvme_attach_controller" 00:34:49.552 } 00:34:49.552 EOF 00:34:49.552 )") 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:49.552 { 00:34:49.552 "params": { 00:34:49.552 "name": "Nvme$subsystem", 00:34:49.552 "trtype": "$TEST_TRANSPORT", 00:34:49.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.552 "adrfam": "ipv4", 00:34:49.552 "trsvcid": "$NVMF_PORT", 00:34:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.552 "hdgst": ${hdgst:-false}, 00:34:49.552 "ddgst": ${ddgst:-false} 00:34:49.552 }, 00:34:49.552 "method": "bdev_nvme_attach_controller" 00:34:49.552 } 00:34:49.552 EOF 00:34:49.552 )") 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:49.552 "params": { 00:34:49.552 "name": "Nvme0", 00:34:49.552 "trtype": "tcp", 00:34:49.552 "traddr": "10.0.0.2", 00:34:49.552 "adrfam": "ipv4", 00:34:49.552 "trsvcid": "4420", 00:34:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:49.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:49.552 "hdgst": false, 00:34:49.552 "ddgst": false 00:34:49.552 }, 00:34:49.552 "method": "bdev_nvme_attach_controller" 00:34:49.552 },{ 00:34:49.552 "params": { 00:34:49.552 "name": "Nvme1", 00:34:49.552 "trtype": "tcp", 00:34:49.552 "traddr": "10.0.0.2", 00:34:49.552 "adrfam": "ipv4", 00:34:49.552 "trsvcid": "4420", 00:34:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:49.552 "hdgst": false, 00:34:49.552 "ddgst": false 00:34:49.552 }, 00:34:49.552 "method": "bdev_nvme_attach_controller" 00:34:49.552 }' 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:49.552 21:27:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.552 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:49.552 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:49.552 fio-3.35 00:34:49.552 Starting 2 threads 00:34:59.567 00:34:59.567 filename0: (groupid=0, jobs=1): err= 0: pid=1567823: Thu Dec 5 21:28:07 2024 00:34:59.567 read: IOPS=191, BW=764KiB/s (783kB/s)(7648KiB/10005msec) 00:34:59.567 slat (nsec): min=5896, max=42248, avg=8100.37, stdev=2860.12 00:34:59.567 clat (usec): min=391, max=42544, avg=20907.72, stdev=20332.92 00:34:59.567 lat (usec): min=397, max=42551, avg=20915.82, stdev=20331.99 00:34:59.567 clat percentiles (usec): 00:34:59.567 | 1.00th=[ 412], 5.00th=[ 461], 10.00th=[ 494], 20.00th=[ 594], 00:34:59.567 | 30.00th=[ 627], 40.00th=[ 930], 50.00th=[ 1106], 60.00th=[41157], 00:34:59.567 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:59.567 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:59.567 | 99.99th=[42730] 00:34:59.567 bw ( KiB/s): min= 672, max= 832, per=49.40%, avg=762.95, stdev=30.66, samples=19 00:34:59.567 iops : min= 168, max= 208, avg=190.74, stdev= 7.67, samples=19 00:34:59.567 lat (usec) : 500=11.51%, 750=26.20%, 1000=9.99% 00:34:59.567 lat (msec) : 2=2.51%, 50=49.79% 00:34:59.567 cpu : usr=96.25%, sys=3.49%, ctx=14, majf=0, minf=0 00:34:59.567 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:59.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.567 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.567 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:59.567 filename1: (groupid=0, jobs=1): err= 0: pid=1567824: Thu Dec 5 21:28:07 2024 00:34:59.567 read: IOPS=195, BW=781KiB/s (800kB/s)(7840KiB/10041msec) 00:34:59.567 slat (nsec): min=5929, max=42749, avg=8001.99, stdev=2785.28 00:34:59.567 clat (usec): min=385, max=42610, avg=20468.94, stdev=20387.77 00:34:59.567 lat (usec): min=391, max=42616, avg=20476.94, stdev=20386.92 00:34:59.567 clat percentiles (usec): 00:34:59.567 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 578], 00:34:59.567 | 30.00th=[ 603], 40.00th=[ 652], 50.00th=[ 996], 60.00th=[41157], 00:34:59.567 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:34:59.567 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:59.567 | 99.99th=[42730] 00:34:59.567 bw ( KiB/s): min= 704, max= 832, per=50.70%, avg=782.40, stdev=36.67, samples=20 00:34:59.567 iops : min= 176, max= 208, avg=195.60, stdev= 9.17, samples=20 00:34:59.567 lat (usec) : 500=19.08%, 750=26.84%, 1000=4.13% 00:34:59.567 lat (msec) : 2=1.17%, 50=48.78% 00:34:59.567 cpu : usr=96.56%, sys=3.19%, ctx=13, majf=0, minf=0 00:34:59.567 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.567 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.567 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:59.567 00:34:59.567 Run status group 0 (all jobs): 00:34:59.567 READ: bw=1542KiB/s (1579kB/s), 764KiB/s-781KiB/s (783kB/s-800kB/s), io=15.1MiB (15.9MB), run=10005-10041msec 00:34:59.826 21:28:07 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 00:34:59.826 real 0m11.596s 00:34:59.826 user 0m26.751s 00:34:59.826 sys 0m1.068s 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 ************************************ 00:34:59.826 END TEST fio_dif_1_multi_subsystems 00:34:59.826 ************************************ 00:34:59.826 21:28:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:59.826 21:28:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:59.826 21:28:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 ************************************ 00:34:59.826 START TEST fio_dif_rand_params 00:34:59.826 ************************************ 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:59.826 21:28:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 bdev_null0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.826 
21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.826 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.827 [2024-12-05 21:28:07.921657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:59.827 { 00:34:59.827 "params": { 00:34:59.827 "name": 
"Nvme$subsystem", 00:34:59.827 "trtype": "$TEST_TRANSPORT", 00:34:59.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.827 "adrfam": "ipv4", 00:34:59.827 "trsvcid": "$NVMF_PORT", 00:34:59.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.827 "hdgst": ${hdgst:-false}, 00:34:59.827 "ddgst": ${ddgst:-false} 00:34:59.827 }, 00:34:59.827 "method": "bdev_nvme_attach_controller" 00:34:59.827 } 00:34:59.827 EOF 00:34:59.827 )") 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:59.827 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:00.085 21:28:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:00.085 "params": { 00:35:00.085 "name": "Nvme0", 00:35:00.085 "trtype": "tcp", 00:35:00.085 "traddr": "10.0.0.2", 00:35:00.085 "adrfam": "ipv4", 00:35:00.085 "trsvcid": "4420", 00:35:00.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:00.085 "hdgst": false, 00:35:00.085 "ddgst": false 00:35:00.085 }, 00:35:00.085 "method": "bdev_nvme_attach_controller" 00:35:00.085 }' 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:00.085 21:28:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:00.085 21:28:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:00.085 21:28:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:00.085 21:28:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:00.085 21:28:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:00.344 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:00.344 ... 00:35:00.344 fio-3.35 00:35:00.344 Starting 3 threads 00:35:06.903 00:35:06.903 filename0: (groupid=0, jobs=1): err= 0: pid=1569794: Thu Dec 5 21:28:13 2024 00:35:06.903 read: IOPS=326, BW=40.8MiB/s (42.7MB/s)(204MiB/5005msec) 00:35:06.903 slat (nsec): min=6195, max=60220, avg=18232.92, stdev=7101.38 00:35:06.903 clat (usec): min=3546, max=87996, avg=9177.32, stdev=6384.58 00:35:06.903 lat (usec): min=3556, max=88009, avg=9195.55, stdev=6384.46 00:35:06.903 clat percentiles (usec): 00:35:06.903 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7111], 00:35:06.903 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8717], 00:35:06.903 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10290], 00:35:06.903 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[87557], 00:35:06.903 | 99.99th=[87557] 00:35:06.903 bw ( KiB/s): min=38912, max=47104, per=35.02%, avg=41728.00, stdev=2703.87, samples=10 00:35:06.903 iops : min= 304, max= 368, avg=326.00, stdev=21.12, samples=10 00:35:06.903 lat (msec) : 4=0.12%, 10=92.95%, 20=4.60%, 50=2.08%, 100=0.25% 00:35:06.903 cpu : usr=94.72%, sys=4.50%, ctx=97, majf=0, minf=108 00:35:06.903 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.903 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.903 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.903 filename0: (groupid=0, jobs=1): err= 0: pid=1569795: Thu Dec 5 21:28:13 2024 00:35:06.903 read: IOPS=291, BW=36.5MiB/s 
(38.2MB/s)(182MiB/5002msec) 00:35:06.903 slat (nsec): min=6136, max=45510, avg=17841.05, stdev=8536.86 00:35:06.903 clat (usec): min=3296, max=54133, avg=10263.41, stdev=8281.16 00:35:06.903 lat (usec): min=3305, max=54139, avg=10281.26, stdev=8280.83 00:35:06.903 clat percentiles (usec): 00:35:06.903 | 1.00th=[ 4113], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7832], 00:35:06.903 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:35:06.903 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[11076], 00:35:06.903 | 99.00th=[51119], 99.50th=[51119], 99.90th=[53216], 99.95th=[54264], 00:35:06.903 | 99.99th=[54264] 00:35:06.903 bw ( KiB/s): min=27136, max=43264, per=31.49%, avg=37518.22, stdev=5961.44, samples=9 00:35:06.903 iops : min= 212, max= 338, avg=293.11, stdev=46.57, samples=9 00:35:06.903 lat (msec) : 4=0.82%, 10=84.92%, 20=10.14%, 50=1.85%, 100=2.26% 00:35:06.903 cpu : usr=95.06%, sys=3.92%, ctx=356, majf=0, minf=89 00:35:06.903 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.903 issued rwts: total=1459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.903 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.903 filename0: (groupid=0, jobs=1): err= 0: pid=1569796: Thu Dec 5 21:28:13 2024 00:35:06.903 read: IOPS=317, BW=39.7MiB/s (41.7MB/s)(200MiB/5043msec) 00:35:06.903 slat (nsec): min=6283, max=45293, avg=17682.07, stdev=8467.22 00:35:06.903 clat (usec): min=3147, max=50285, avg=9392.26, stdev=4734.07 00:35:06.903 lat (usec): min=3154, max=50299, avg=9409.94, stdev=4734.39 00:35:06.903 clat percentiles (usec): 00:35:06.903 | 1.00th=[ 3687], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6587], 00:35:06.903 | 30.00th=[ 7701], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:35:06.903 | 70.00th=[10290], 
80.00th=[10814], 90.00th=[11600], 95.00th=[12256], 00:35:06.903 | 99.00th=[44827], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:35:06.903 | 99.99th=[50070] 00:35:06.903 bw ( KiB/s): min=33280, max=48128, per=34.40%, avg=40985.60, stdev=4095.02, samples=10 00:35:06.903 iops : min= 260, max= 376, avg=320.20, stdev=31.99, samples=10 00:35:06.903 lat (msec) : 4=2.06%, 10=63.26%, 20=33.44%, 50=1.12%, 100=0.12% 00:35:06.903 cpu : usr=96.35%, sys=3.31%, ctx=5, majf=0, minf=96 00:35:06.903 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.903 issued rwts: total=1603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.903 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.903 00:35:06.903 Run status group 0 (all jobs): 00:35:06.903 READ: bw=116MiB/s (122MB/s), 36.5MiB/s-40.8MiB/s (38.2MB/s-42.7MB/s), io=587MiB (615MB), run=5002-5043msec 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.903 21:28:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:06.903 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 bdev_null0 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 [2024-12-05 21:28:14.166114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:06.904 bdev_null1 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 bdev_null2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:06.904 21:28:14 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:06.904 { 00:35:06.904 "params": { 00:35:06.904 "name": "Nvme$subsystem", 00:35:06.904 "trtype": "$TEST_TRANSPORT", 00:35:06.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.904 "adrfam": "ipv4", 00:35:06.904 "trsvcid": "$NVMF_PORT", 00:35:06.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.904 "hdgst": ${hdgst:-false}, 00:35:06.904 "ddgst": ${ddgst:-false} 00:35:06.904 }, 00:35:06.904 "method": "bdev_nvme_attach_controller" 00:35:06.904 } 00:35:06.904 EOF 00:35:06.904 )") 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:06.904 { 00:35:06.904 "params": { 00:35:06.904 "name": "Nvme$subsystem", 00:35:06.904 "trtype": "$TEST_TRANSPORT", 00:35:06.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.904 "adrfam": "ipv4", 00:35:06.904 "trsvcid": "$NVMF_PORT", 00:35:06.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.904 "hdgst": ${hdgst:-false}, 00:35:06.904 "ddgst": ${ddgst:-false} 00:35:06.904 }, 00:35:06.904 "method": "bdev_nvme_attach_controller" 00:35:06.904 } 00:35:06.904 EOF 00:35:06.904 )") 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:06.904 { 00:35:06.904 "params": { 00:35:06.904 "name": "Nvme$subsystem", 00:35:06.904 "trtype": "$TEST_TRANSPORT", 00:35:06.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.904 "adrfam": "ipv4", 00:35:06.904 "trsvcid": "$NVMF_PORT", 00:35:06.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.904 "hdgst": ${hdgst:-false}, 00:35:06.904 "ddgst": ${ddgst:-false} 00:35:06.904 }, 00:35:06.904 "method": "bdev_nvme_attach_controller" 00:35:06.904 } 00:35:06.904 EOF 00:35:06.904 )") 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:06.904 21:28:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:06.904 "params": { 00:35:06.904 "name": "Nvme0", 00:35:06.904 "trtype": "tcp", 00:35:06.904 "traddr": "10.0.0.2", 00:35:06.904 "adrfam": "ipv4", 00:35:06.904 "trsvcid": "4420", 00:35:06.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:06.905 "hdgst": false, 00:35:06.905 "ddgst": false 00:35:06.905 }, 00:35:06.905 "method": "bdev_nvme_attach_controller" 00:35:06.905 },{ 00:35:06.905 "params": { 00:35:06.905 "name": "Nvme1", 00:35:06.905 "trtype": "tcp", 00:35:06.905 "traddr": "10.0.0.2", 00:35:06.905 "adrfam": "ipv4", 00:35:06.905 "trsvcid": "4420", 00:35:06.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:06.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:06.905 "hdgst": false, 00:35:06.905 "ddgst": false 00:35:06.905 }, 00:35:06.905 "method": "bdev_nvme_attach_controller" 00:35:06.905 },{ 00:35:06.905 "params": { 00:35:06.905 "name": "Nvme2", 00:35:06.905 "trtype": "tcp", 00:35:06.905 "traddr": "10.0.0.2", 00:35:06.905 "adrfam": "ipv4", 00:35:06.905 "trsvcid": "4420", 00:35:06.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:06.905 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:06.905 "hdgst": false, 00:35:06.905 "ddgst": false 00:35:06.905 }, 00:35:06.905 "method": "bdev_nvme_attach_controller" 00:35:06.905 }' 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.905 21:28:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:06.905 21:28:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.905 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:06.905 ... 00:35:06.905 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:06.905 ... 00:35:06.905 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:06.905 ... 
00:35:06.905 fio-3.35 00:35:06.905 Starting 24 threads 00:35:19.116 00:35:19.116 filename0: (groupid=0, jobs=1): err= 0: pid=1570842: Thu Dec 5 21:28:25 2024 00:35:19.116 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10004msec) 00:35:19.116 slat (nsec): min=8203, max=92536, avg=27534.64, stdev=14848.09 00:35:19.116 clat (usec): min=12745, max=31770, avg=30153.85, stdev=1456.45 00:35:19.116 lat (usec): min=12763, max=31786, avg=30181.38, stdev=1456.02 00:35:19.116 clat percentiles (usec): 00:35:19.116 | 1.00th=[22938], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.116 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.116 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.116 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31851], 00:35:19.116 | 99.99th=[31851] 00:35:19.116 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:35:19.116 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:35:19.116 lat (msec) : 20=0.91%, 50=99.09% 00:35:19.116 cpu : usr=98.60%, sys=1.01%, ctx=10, majf=0, minf=9 00:35:19.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.116 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.116 filename0: (groupid=0, jobs=1): err= 0: pid=1570843: Thu Dec 5 21:28:25 2024 00:35:19.116 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.4MiB/10001msec) 00:35:19.116 slat (usec): min=7, max=132, avg=34.69, stdev=20.57 00:35:19.116 clat (usec): min=18327, max=48793, avg=30225.95, stdev=1368.75 00:35:19.116 lat (usec): min=18342, max=48807, avg=30260.64, stdev=1369.02 00:35:19.116 clat percentiles (usec): 00:35:19.116 | 1.00th=[29754], 
5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:19.116 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:19.116 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.116 | 99.00th=[31065], 99.50th=[35390], 99.90th=[48497], 99.95th=[49021], 00:35:19.116 | 99.99th=[49021] 00:35:19.116 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2088.42, stdev=74.55, samples=19 00:35:19.116 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:35:19.116 lat (msec) : 20=0.31%, 50=99.69% 00:35:19.116 cpu : usr=98.60%, sys=1.00%, ctx=12, majf=0, minf=9 00:35:19.116 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:19.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.116 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.116 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.116 filename0: (groupid=0, jobs=1): err= 0: pid=1570844: Thu Dec 5 21:28:25 2024 00:35:19.116 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:35:19.116 slat (nsec): min=7190, max=72897, avg=24494.69, stdev=8861.13 00:35:19.116 clat (usec): min=9036, max=58662, avg=30282.37, stdev=2127.13 00:35:19.117 lat (usec): min=9044, max=58691, avg=30306.86, stdev=2127.24 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.117 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.117 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.117 | 99.00th=[31065], 99.50th=[31327], 99.90th=[58459], 99.95th=[58459], 00:35:19.117 | 99.99th=[58459] 00:35:19.117 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2092.80, stdev=75.15, samples=20 00:35:19.117 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:19.117 lat (msec) : 
10=0.30%, 20=0.30%, 50=99.09%, 100=0.30% 00:35:19.117 cpu : usr=98.52%, sys=1.10%, ctx=13, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 filename0: (groupid=0, jobs=1): err= 0: pid=1570845: Thu Dec 5 21:28:25 2024 00:35:19.117 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:35:19.117 slat (nsec): min=8020, max=60625, avg=18476.85, stdev=7922.27 00:35:19.117 clat (usec): min=18810, max=36025, avg=30373.57, stdev=823.91 00:35:19.117 lat (usec): min=18825, max=36051, avg=30392.05, stdev=823.26 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:35:19.117 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:19.117 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.117 | 99.00th=[31327], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914], 00:35:19.117 | 99.99th=[35914] 00:35:19.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2095.16, stdev=63.44, samples=19 00:35:19.117 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:35:19.117 lat (msec) : 20=0.30%, 50=99.70% 00:35:19.117 cpu : usr=98.45%, sys=1.17%, ctx=13, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 
filename0: (groupid=0, jobs=1): err= 0: pid=1570846: Thu Dec 5 21:28:25 2024 00:35:19.117 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10004msec) 00:35:19.117 slat (nsec): min=7563, max=89055, avg=27832.62, stdev=13460.67 00:35:19.117 clat (usec): min=10712, max=31649, avg=30136.48, stdev=1447.81 00:35:19.117 lat (usec): min=10720, max=31678, avg=30164.31, stdev=1449.12 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[22938], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.117 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.117 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.117 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:19.117 | 99.99th=[31589] 00:35:19.117 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:35:19.117 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:35:19.117 lat (msec) : 20=0.78%, 50=99.22% 00:35:19.117 cpu : usr=98.64%, sys=0.97%, ctx=12, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 filename0: (groupid=0, jobs=1): err= 0: pid=1570847: Thu Dec 5 21:28:25 2024 00:35:19.117 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.5MiB/10017msec) 00:35:19.117 slat (nsec): min=6352, max=69749, avg=24963.12, stdev=8163.65 00:35:19.117 clat (usec): min=16929, max=44423, avg=30296.28, stdev=920.91 00:35:19.117 lat (usec): min=16956, max=44440, avg=30321.24, stdev=920.57 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:19.117 | 30.00th=[30278], 
40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.117 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.117 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35390], 99.95th=[35390], 00:35:19.117 | 99.99th=[44303] 00:35:19.117 bw ( KiB/s): min= 2048, max= 2251, per=4.16%, avg=2096.55, stdev=69.73, samples=20 00:35:19.117 iops : min= 512, max= 562, avg=524.10, stdev=17.34, samples=20 00:35:19.117 lat (msec) : 20=0.27%, 50=99.73% 00:35:19.117 cpu : usr=98.43%, sys=1.18%, ctx=13, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 filename0: (groupid=0, jobs=1): err= 0: pid=1570848: Thu Dec 5 21:28:25 2024 00:35:19.117 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10018msec) 00:35:19.117 slat (nsec): min=7553, max=60801, avg=21209.33, stdev=9055.18 00:35:19.117 clat (usec): min=13964, max=39078, avg=30186.06, stdev=1706.98 00:35:19.117 lat (usec): min=13982, max=39097, avg=30207.27, stdev=1707.27 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[19006], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:19.117 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:19.117 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.117 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:19.117 | 99.99th=[39060] 00:35:19.117 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:35:19.117 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:35:19.117 lat (msec) : 20=1.21%, 50=98.79% 00:35:19.117 cpu : usr=98.43%, sys=1.18%, ctx=13, majf=0, 
minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 filename0: (groupid=0, jobs=1): err= 0: pid=1570849: Thu Dec 5 21:28:25 2024 00:35:19.117 read: IOPS=524, BW=2096KiB/s (2146kB/s)(20.5MiB/10015msec) 00:35:19.117 slat (nsec): min=6638, max=88174, avg=35688.60, stdev=20024.37 00:35:19.117 clat (usec): min=18324, max=35198, avg=30182.48, stdev=766.91 00:35:19.117 lat (usec): min=18340, max=35215, avg=30218.17, stdev=767.60 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:19.117 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:19.117 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.117 | 99.00th=[31065], 99.50th=[31589], 99.90th=[35390], 99.95th=[35390], 00:35:19.117 | 99.99th=[35390] 00:35:19.117 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2088.42, stdev=61.13, samples=19 00:35:19.117 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:35:19.117 lat (msec) : 20=0.30%, 50=99.70% 00:35:19.117 cpu : usr=98.63%, sys=0.98%, ctx=13, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 filename1: (groupid=0, jobs=1): err= 0: pid=1570850: Thu Dec 5 21:28:25 2024 00:35:19.117 read: 
IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:35:19.117 slat (usec): min=6, max=102, avg=35.13, stdev=20.36 00:35:19.117 clat (usec): min=10033, max=59160, avg=30154.33, stdev=1743.54 00:35:19.117 lat (usec): min=10047, max=59178, avg=30189.46, stdev=1744.45 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:19.117 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:19.117 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.117 | 99.00th=[31065], 99.50th=[31589], 99.90th=[47973], 99.95th=[47973], 00:35:19.117 | 99.99th=[58983] 00:35:19.117 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2092.50, stdev=74.29, samples=20 00:35:19.117 iops : min= 480, max= 544, avg=523.05, stdev=18.63, samples=20 00:35:19.117 lat (msec) : 20=0.65%, 50=99.31%, 100=0.04% 00:35:19.117 cpu : usr=98.72%, sys=0.89%, ctx=11, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.117 filename1: (groupid=0, jobs=1): err= 0: pid=1570851: Thu Dec 5 21:28:25 2024 00:35:19.117 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:35:19.117 slat (nsec): min=7478, max=98494, avg=22913.85, stdev=19071.66 00:35:19.117 clat (usec): min=10666, max=38838, avg=30332.63, stdev=794.39 00:35:19.117 lat (usec): min=10685, max=38863, avg=30355.55, stdev=791.76 00:35:19.117 clat percentiles (usec): 00:35:19.117 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:19.117 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:19.117 | 70.00th=[30540], 
80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:19.117 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:35:19.117 | 99.99th=[39060] 00:35:19.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2095.16, stdev=63.44, samples=19 00:35:19.117 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:35:19.117 lat (msec) : 20=0.30%, 50=99.70% 00:35:19.117 cpu : usr=98.28%, sys=1.34%, ctx=12, majf=0, minf=9 00:35:19.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename1: (groupid=0, jobs=1): err= 0: pid=1570852: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=524, BW=2096KiB/s (2146kB/s)(20.5MiB/10015msec) 00:35:19.118 slat (nsec): min=6627, max=68049, avg=27559.52, stdev=11785.89 00:35:19.118 clat (usec): min=18523, max=40587, avg=30295.12, stdev=1021.85 00:35:19.118 lat (usec): min=18560, max=40614, avg=30322.68, stdev=1021.45 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.118 | 99.00th=[31065], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:35:19.118 | 99.99th=[40633] 00:35:19.118 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2088.42, stdev=61.13, samples=19 00:35:19.118 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:35:19.118 lat (msec) : 20=0.30%, 50=99.70% 00:35:19.118 cpu : usr=98.59%, sys=1.00%, ctx=53, majf=0, minf=9 00:35:19.118 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename1: (groupid=0, jobs=1): err= 0: pid=1570853: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:35:19.118 slat (nsec): min=7585, max=90035, avg=35434.76, stdev=20608.46 00:35:19.118 clat (usec): min=17269, max=48747, avg=30276.01, stdev=1545.50 00:35:19.118 lat (usec): min=17278, max=48761, avg=30311.44, stdev=1544.90 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.118 | 99.00th=[31589], 99.50th=[40109], 99.90th=[48497], 99.95th=[48497], 00:35:19.118 | 99.99th=[48497] 00:35:19.118 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2087.58, stdev=67.75, samples=19 00:35:19.118 iops : min= 480, max= 544, avg=521.89, stdev=16.94, samples=19 00:35:19.118 lat (msec) : 20=0.31%, 50=99.69% 00:35:19.118 cpu : usr=98.47%, sys=1.14%, ctx=10, majf=0, minf=11 00:35:19.118 IO depths : 1=2.5%, 2=8.5%, 4=24.2%, 8=54.6%, 16=10.1%, 32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename1: (groupid=0, jobs=1): err= 0: pid=1570854: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10004msec) 00:35:19.118 slat (nsec): 
min=7634, max=91041, avg=28445.07, stdev=13578.29 00:35:19.118 clat (usec): min=12799, max=31664, avg=30132.28, stdev=1446.43 00:35:19.118 lat (usec): min=12816, max=31692, avg=30160.72, stdev=1446.98 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[22938], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.118 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:19.118 | 99.99th=[31589] 00:35:19.118 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:35:19.118 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:35:19.118 lat (msec) : 20=0.91%, 50=99.09% 00:35:19.118 cpu : usr=98.56%, sys=1.05%, ctx=11, majf=0, minf=9 00:35:19.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename1: (groupid=0, jobs=1): err= 0: pid=1570855: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:35:19.118 slat (nsec): min=7425, max=88014, avg=28419.10, stdev=15109.48 00:35:19.118 clat (usec): min=9035, max=67211, avg=30251.94, stdev=2902.33 00:35:19.118 lat (usec): min=9050, max=67225, avg=30280.36, stdev=2902.26 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[21365], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.118 | 99.00th=[35914], 
99.50th=[44827], 99.90th=[67634], 99.95th=[67634], 00:35:19.118 | 99.99th=[67634] 00:35:19.118 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2092.95, stdev=75.33, samples=20 00:35:19.118 iops : min= 480, max= 544, avg=523.20, stdev=18.92, samples=20 00:35:19.118 lat (msec) : 10=0.30%, 20=0.30%, 50=99.09%, 100=0.30% 00:35:19.118 cpu : usr=98.73%, sys=0.88%, ctx=7, majf=0, minf=9 00:35:19.118 IO depths : 1=5.4%, 2=11.0%, 4=22.6%, 8=53.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename1: (groupid=0, jobs=1): err= 0: pid=1570856: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:35:19.118 slat (nsec): min=7727, max=92491, avg=28492.61, stdev=13964.21 00:35:19.118 clat (usec): min=12866, max=40159, avg=30139.98, stdev=1476.30 00:35:19.118 lat (usec): min=12892, max=40184, avg=30168.47, stdev=1476.54 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[22676], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.118 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:19.118 | 99.99th=[40109] 00:35:19.118 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:35:19.118 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:35:19.118 lat (msec) : 20=0.91%, 50=99.09% 00:35:19.118 cpu : usr=98.57%, sys=1.04%, ctx=13, majf=0, minf=9 00:35:19.118 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename1: (groupid=0, jobs=1): err= 0: pid=1570857: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10004msec) 00:35:19.118 slat (nsec): min=7495, max=88503, avg=27882.35, stdev=13541.78 00:35:19.118 clat (usec): min=13150, max=40041, avg=30144.43, stdev=1447.96 00:35:19.118 lat (usec): min=13176, max=40061, avg=30172.31, stdev=1448.74 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[22676], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.118 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:19.118 | 99.99th=[40109] 00:35:19.118 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=63.38, samples=19 00:35:19.118 iops : min= 512, max= 544, avg=525.47, stdev=15.84, samples=19 00:35:19.118 lat (msec) : 20=0.87%, 50=99.13% 00:35:19.118 cpu : usr=98.75%, sys=0.86%, ctx=9, majf=0, minf=9 00:35:19.118 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename2: (groupid=0, jobs=1): err= 0: pid=1570858: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=524, BW=2096KiB/s (2147kB/s)(20.5MiB/10010msec) 00:35:19.118 slat (usec): min=6, max=121, avg=33.83, stdev=20.94 00:35:19.118 clat (usec): 
min=10009, max=47849, avg=30207.59, stdev=1613.78 00:35:19.118 lat (usec): min=10016, max=47866, avg=30241.43, stdev=1614.13 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.118 | 99.00th=[31065], 99.50th=[31589], 99.90th=[47973], 99.95th=[47973], 00:35:19.118 | 99.99th=[47973] 00:35:19.118 bw ( KiB/s): min= 1923, max= 2183, per=4.16%, avg=2092.50, stdev=69.11, samples=20 00:35:19.118 iops : min= 480, max= 545, avg=523.05, stdev=17.32, samples=20 00:35:19.118 lat (msec) : 20=0.57%, 50=99.43% 00:35:19.118 cpu : usr=98.56%, sys=1.06%, ctx=8, majf=0, minf=9 00:35:19.118 IO depths : 1=2.4%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:19.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.118 issued rwts: total=5246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.118 filename2: (groupid=0, jobs=1): err= 0: pid=1570859: Thu Dec 5 21:28:25 2024 00:35:19.118 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:35:19.118 slat (nsec): min=9806, max=78417, avg=27009.36, stdev=10839.28 00:35:19.118 clat (usec): min=9017, max=57761, avg=30267.29, stdev=2093.56 00:35:19.118 lat (usec): min=9032, max=57778, avg=30294.30, stdev=2093.24 00:35:19.118 clat percentiles (usec): 00:35:19.118 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.118 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.118 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.118 | 99.00th=[31065], 99.50th=[31327], 99.90th=[57934], 99.95th=[57934], 00:35:19.118 | 99.99th=[57934] 
00:35:19.118 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2092.95, stdev=74.79, samples=20 00:35:19.118 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:19.119 lat (msec) : 10=0.30%, 20=0.30%, 50=99.09%, 100=0.30% 00:35:19.119 cpu : usr=98.58%, sys=1.03%, ctx=10, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 filename2: (groupid=0, jobs=1): err= 0: pid=1570860: Thu Dec 5 21:28:25 2024 00:35:19.119 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10003msec) 00:35:19.119 slat (nsec): min=7487, max=66107, avg=16389.83, stdev=9882.49 00:35:19.119 clat (usec): min=12663, max=32759, avg=30270.94, stdev=1469.78 00:35:19.119 lat (usec): min=12681, max=32772, avg=30287.33, stdev=1468.46 00:35:19.119 clat percentiles (usec): 00:35:19.119 | 1.00th=[22938], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:35:19.119 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:19.119 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:19.119 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:19.119 | 99.99th=[32637] 00:35:19.119 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:35:19.119 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:35:19.119 lat (msec) : 20=0.91%, 50=99.09% 00:35:19.119 cpu : usr=98.57%, sys=1.03%, ctx=64, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 filename2: (groupid=0, jobs=1): err= 0: pid=1570861: Thu Dec 5 21:28:25 2024 00:35:19.119 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10009msec) 00:35:19.119 slat (nsec): min=8017, max=64999, avg=25083.17, stdev=8496.22 00:35:19.119 clat (usec): min=9157, max=58024, avg=30285.38, stdev=2062.06 00:35:19.119 lat (usec): min=9182, max=58061, avg=30310.47, stdev=2062.59 00:35:19.119 clat percentiles (usec): 00:35:19.119 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.119 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.119 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.119 | 99.00th=[31065], 99.50th=[31327], 99.90th=[57934], 99.95th=[57934], 00:35:19.119 | 99.99th=[57934] 00:35:19.119 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2092.80, stdev=75.15, samples=20 00:35:19.119 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:19.119 lat (msec) : 10=0.27%, 20=0.30%, 50=99.12%, 100=0.30% 00:35:19.119 cpu : usr=98.48%, sys=1.12%, ctx=24, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued rwts: total=5246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 filename2: (groupid=0, jobs=1): err= 0: pid=1570862: Thu Dec 5 21:28:25 2024 00:35:19.119 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10005msec) 00:35:19.119 slat (nsec): min=7107, max=68897, avg=23698.35, stdev=8145.07 00:35:19.119 clat (usec): min=18810, max=39299, avg=30308.93, stdev=803.67 00:35:19.119 
lat (usec): min=18825, max=39322, avg=30332.63, stdev=803.29 00:35:19.119 clat percentiles (usec): 00:35:19.119 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:19.119 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.119 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.119 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31589], 00:35:19.119 | 99.99th=[39060] 00:35:19.119 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2095.16, stdev=63.44, samples=19 00:35:19.119 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:35:19.119 lat (msec) : 20=0.30%, 50=99.70% 00:35:19.119 cpu : usr=98.53%, sys=1.07%, ctx=13, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 filename2: (groupid=0, jobs=1): err= 0: pid=1570863: Thu Dec 5 21:28:25 2024 00:35:19.119 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:35:19.119 slat (nsec): min=7979, max=70668, avg=24822.62, stdev=8302.51 00:35:19.119 clat (usec): min=9172, max=58239, avg=30300.34, stdev=2106.33 00:35:19.119 lat (usec): min=9193, max=58255, avg=30325.16, stdev=2106.32 00:35:19.119 clat percentiles (usec): 00:35:19.119 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:19.119 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.119 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.119 | 99.00th=[31065], 99.50th=[31327], 99.90th=[57934], 99.95th=[58459], 00:35:19.119 | 99.99th=[58459] 00:35:19.119 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, 
avg=2092.80, stdev=75.15, samples=20 00:35:19.119 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:19.119 lat (msec) : 10=0.30%, 20=0.30%, 50=99.09%, 100=0.30% 00:35:19.119 cpu : usr=98.41%, sys=1.20%, ctx=17, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 filename2: (groupid=0, jobs=1): err= 0: pid=1570864: Thu Dec 5 21:28:25 2024 00:35:19.119 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10004msec) 00:35:19.119 slat (nsec): min=7805, max=89032, avg=28353.78, stdev=13639.67 00:35:19.119 clat (usec): min=12814, max=31681, avg=30140.17, stdev=1439.02 00:35:19.119 lat (usec): min=12833, max=31708, avg=30168.52, stdev=1439.53 00:35:19.119 clat percentiles (usec): 00:35:19.119 | 1.00th=[22152], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.119 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.119 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.119 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:35:19.119 | 99.99th=[31589] 00:35:19.119 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.89, stdev=64.93, samples=19 00:35:19.119 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:35:19.119 lat (msec) : 20=0.91%, 50=99.09% 00:35:19.119 cpu : usr=98.59%, sys=1.02%, ctx=13, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued 
rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 filename2: (groupid=0, jobs=1): err= 0: pid=1570865: Thu Dec 5 21:28:25 2024 00:35:19.119 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:35:19.119 slat (nsec): min=7491, max=87503, avg=26958.90, stdev=13254.45 00:35:19.119 clat (usec): min=12467, max=66866, avg=30323.75, stdev=2299.97 00:35:19.119 lat (usec): min=12475, max=66909, avg=30350.71, stdev=2300.82 00:35:19.119 clat percentiles (usec): 00:35:19.119 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:19.119 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:19.119 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:19.119 | 99.00th=[31065], 99.50th=[31327], 99.90th=[66847], 99.95th=[66847], 00:35:19.119 | 99.99th=[66847] 00:35:19.119 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2081.84, stdev=71.56, samples=19 00:35:19.119 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:35:19.119 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:35:19.119 cpu : usr=98.67%, sys=0.94%, ctx=15, majf=0, minf=9 00:35:19.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:19.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.119 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:19.119 00:35:19.119 Run status group 0 (all jobs): 00:35:19.119 READ: bw=49.1MiB/s (51.5MB/s), 2092KiB/s-2108KiB/s (2143kB/s-2159kB/s), io=492MiB (516MB), run=10001-10018msec 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:19.119 
21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.119 21:28:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.119 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 bdev_null0 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 [2024-12-05 21:28:26.065682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 bdev_null1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:19.120 { 00:35:19.120 "params": { 00:35:19.120 "name": "Nvme$subsystem", 00:35:19.120 "trtype": "$TEST_TRANSPORT", 00:35:19.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.120 "adrfam": "ipv4", 00:35:19.120 "trsvcid": "$NVMF_PORT", 00:35:19.120 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.120 "hdgst": ${hdgst:-false}, 00:35:19.120 "ddgst": ${ddgst:-false} 00:35:19.120 }, 00:35:19.120 "method": "bdev_nvme_attach_controller" 00:35:19.120 } 00:35:19.120 EOF 00:35:19.120 )") 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:19.120 { 00:35:19.120 "params": { 00:35:19.120 "name": "Nvme$subsystem", 00:35:19.120 "trtype": "$TEST_TRANSPORT", 00:35:19.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.120 "adrfam": "ipv4", 00:35:19.120 "trsvcid": "$NVMF_PORT", 00:35:19.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.120 "hdgst": ${hdgst:-false}, 00:35:19.120 "ddgst": ${ddgst:-false} 00:35:19.120 }, 00:35:19.120 "method": "bdev_nvme_attach_controller" 00:35:19.120 } 00:35:19.120 EOF 00:35:19.120 )") 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:19.120 21:28:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:19.120 "params": { 00:35:19.120 "name": "Nvme0", 00:35:19.120 "trtype": "tcp", 00:35:19.120 "traddr": "10.0.0.2", 00:35:19.120 "adrfam": "ipv4", 00:35:19.120 "trsvcid": "4420", 00:35:19.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.120 "hdgst": false, 00:35:19.120 "ddgst": false 00:35:19.120 }, 00:35:19.120 "method": "bdev_nvme_attach_controller" 00:35:19.120 },{ 00:35:19.120 "params": { 00:35:19.120 "name": "Nvme1", 00:35:19.120 "trtype": "tcp", 00:35:19.120 "traddr": "10.0.0.2", 00:35:19.120 "adrfam": "ipv4", 00:35:19.120 "trsvcid": "4420", 00:35:19.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.121 "hdgst": false, 00:35:19.121 "ddgst": false 00:35:19.121 }, 00:35:19.121 "method": "bdev_nvme_attach_controller" 00:35:19.121 }' 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:19.121 21:28:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:19.121 21:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.121 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:19.121 ... 00:35:19.121 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:19.121 ... 00:35:19.121 fio-3.35 00:35:19.121 Starting 4 threads 00:35:24.401 00:35:24.401 filename0: (groupid=0, jobs=1): err= 0: pid=1572820: Thu Dec 5 21:28:32 2024 00:35:24.401 read: IOPS=2655, BW=20.7MiB/s (21.8MB/s)(104MiB/5001msec) 00:35:24.401 slat (nsec): min=6086, max=73198, avg=16184.52, stdev=12092.05 00:35:24.401 clat (usec): min=637, max=5731, avg=2956.76, stdev=385.80 00:35:24.401 lat (usec): min=643, max=5758, avg=2972.95, stdev=387.72 00:35:24.401 clat percentiles (usec): 00:35:24.401 | 1.00th=[ 1745], 5.00th=[ 2278], 10.00th=[ 2507], 20.00th=[ 2737], 00:35:24.401 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:35:24.402 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3458], 00:35:24.402 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 5211], 00:35:24.402 | 99.99th=[ 5276] 00:35:24.402 bw ( KiB/s): min=20448, max=22016, per=25.58%, avg=21149.56, stdev=551.30, samples=9 00:35:24.402 iops : min= 2556, max= 2752, avg=2643.67, stdev=68.90, samples=9 00:35:24.402 lat (usec) : 750=0.02%, 1000=0.03% 00:35:24.402 lat (msec) : 2=1.81%, 4=97.06%, 10=1.09% 00:35:24.402 cpu : usr=96.88%, sys=2.76%, ctx=12, majf=0, minf=9 00:35:24.402 IO depths : 1=1.2%, 2=12.6%, 4=60.3%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 
complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 issued rwts: total=13281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.402 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.402 filename0: (groupid=0, jobs=1): err= 0: pid=1572821: Thu Dec 5 21:28:32 2024 00:35:24.402 read: IOPS=2534, BW=19.8MiB/s (20.8MB/s)(99.0MiB/5001msec) 00:35:24.402 slat (usec): min=6, max=157, avg=17.43, stdev=13.08 00:35:24.402 clat (usec): min=535, max=5639, avg=3093.79, stdev=416.52 00:35:24.402 lat (usec): min=547, max=5650, avg=3111.22, stdev=416.70 00:35:24.402 clat percentiles (usec): 00:35:24.402 | 1.00th=[ 2073], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2900], 00:35:24.402 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:24.402 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3785], 00:35:24.402 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5538], 00:35:24.402 | 99.99th=[ 5604] 00:35:24.402 bw ( KiB/s): min=19632, max=21104, per=24.50%, avg=20255.00, stdev=509.40, samples=9 00:35:24.402 iops : min= 2454, max= 2638, avg=2531.78, stdev=63.78, samples=9 00:35:24.402 lat (usec) : 750=0.02%, 1000=0.05% 00:35:24.402 lat (msec) : 2=0.79%, 4=95.98%, 10=3.16% 00:35:24.402 cpu : usr=97.00%, sys=2.66%, ctx=8, majf=0, minf=9 00:35:24.402 IO depths : 1=1.2%, 2=10.9%, 4=62.4%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 issued rwts: total=12677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.402 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.402 filename1: (groupid=0, jobs=1): err= 0: pid=1572823: Thu Dec 5 21:28:32 2024 00:35:24.402 read: IOPS=2518, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5001msec) 00:35:24.402 slat (nsec): min=5867, max=63911, avg=16241.68, stdev=11706.55 00:35:24.402 clat (usec): 
min=551, max=5744, avg=3123.83, stdev=420.42 00:35:24.402 lat (usec): min=571, max=5771, avg=3140.07, stdev=420.64 00:35:24.402 clat percentiles (usec): 00:35:24.402 | 1.00th=[ 2024], 5.00th=[ 2638], 10.00th=[ 2802], 20.00th=[ 2933], 00:35:24.402 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:35:24.402 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3556], 95.00th=[ 3818], 00:35:24.402 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[ 5538], 00:35:24.402 | 99.99th=[ 5735] 00:35:24.402 bw ( KiB/s): min=19360, max=21024, per=24.40%, avg=20172.44, stdev=586.88, samples=9 00:35:24.402 iops : min= 2420, max= 2628, avg=2521.56, stdev=73.36, samples=9 00:35:24.402 lat (usec) : 750=0.12%, 1000=0.10% 00:35:24.402 lat (msec) : 2=0.75%, 4=95.39%, 10=3.64% 00:35:24.402 cpu : usr=96.30%, sys=3.34%, ctx=9, majf=0, minf=9 00:35:24.402 IO depths : 1=0.3%, 2=9.4%, 4=63.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 issued rwts: total=12593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.402 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.402 filename1: (groupid=0, jobs=1): err= 0: pid=1572824: Thu Dec 5 21:28:32 2024 00:35:24.402 read: IOPS=2626, BW=20.5MiB/s (21.5MB/s)(103MiB/5002msec) 00:35:24.402 slat (nsec): min=5860, max=64130, avg=15576.26, stdev=11255.67 00:35:24.402 clat (usec): min=625, max=5503, avg=2990.57, stdev=406.96 00:35:24.402 lat (usec): min=634, max=5520, avg=3006.15, stdev=408.50 00:35:24.402 clat percentiles (usec): 00:35:24.402 | 1.00th=[ 1696], 5.00th=[ 2311], 10.00th=[ 2540], 20.00th=[ 2769], 00:35:24.402 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:35:24.402 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3589], 00:35:24.402 | 99.00th=[ 4293], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[ 
5211], 00:35:24.402 | 99.99th=[ 5473] 00:35:24.402 bw ( KiB/s): min=20160, max=22048, per=25.44%, avg=21034.67, stdev=635.48, samples=9 00:35:24.402 iops : min= 2520, max= 2756, avg=2629.33, stdev=79.44, samples=9 00:35:24.402 lat (usec) : 750=0.03%, 1000=0.12% 00:35:24.402 lat (msec) : 2=1.55%, 4=96.61%, 10=1.68% 00:35:24.402 cpu : usr=96.44%, sys=3.22%, ctx=6, majf=0, minf=9 00:35:24.402 IO depths : 1=1.2%, 2=13.5%, 4=59.1%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.402 issued rwts: total=13140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.402 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:24.402 00:35:24.402 Run status group 0 (all jobs): 00:35:24.402 READ: bw=80.7MiB/s (84.7MB/s), 19.7MiB/s-20.7MiB/s (20.6MB/s-21.8MB/s), io=404MiB (423MB), run=5001-5002msec 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:24.661 21:28:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.661 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.662 00:35:24.662 real 0m24.765s 00:35:24.662 user 4m52.862s 00:35:24.662 sys 0m4.876s 00:35:24.662 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:24.662 21:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 ************************************ 00:35:24.662 END TEST fio_dif_rand_params 00:35:24.662 ************************************ 00:35:24.662 21:28:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:24.662 21:28:32 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:24.662 21:28:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:24.662 21:28:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 ************************************ 00:35:24.662 START TEST fio_dif_digest 00:35:24.662 ************************************ 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 bdev_null0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.662 [2024-12-05 21:28:32.757790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:24.662 { 00:35:24.662 "params": { 00:35:24.662 "name": "Nvme$subsystem", 00:35:24.662 "trtype": "$TEST_TRANSPORT", 00:35:24.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.662 "adrfam": "ipv4", 00:35:24.662 "trsvcid": "$NVMF_PORT", 00:35:24.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.662 "hdgst": ${hdgst:-false}, 00:35:24.662 "ddgst": ${ddgst:-false} 00:35:24.662 }, 00:35:24.662 "method": "bdev_nvme_attach_controller" 00:35:24.662 } 00:35:24.662 EOF 00:35:24.662 )") 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.662 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:24.921 "params": { 00:35:24.921 "name": "Nvme0", 00:35:24.921 "trtype": "tcp", 00:35:24.921 "traddr": "10.0.0.2", 00:35:24.921 "adrfam": "ipv4", 00:35:24.921 "trsvcid": "4420", 00:35:24.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.921 "hdgst": true, 00:35:24.921 "ddgst": true 00:35:24.921 }, 00:35:24.921 "method": "bdev_nvme_attach_controller" 00:35:24.921 }' 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:24.921 21:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.179 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:25.179 ... 
00:35:25.179 fio-3.35 00:35:25.179 Starting 3 threads 00:35:37.387 00:35:37.388 filename0: (groupid=0, jobs=1): err= 0: pid=1574091: Thu Dec 5 21:28:43 2024 00:35:37.388 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(370MiB/10043msec) 00:35:37.388 slat (usec): min=6, max=186, avg=11.54, stdev= 3.71 00:35:37.388 clat (usec): min=7957, max=48534, avg=10154.99, stdev=1190.59 00:35:37.388 lat (usec): min=7969, max=48546, avg=10166.52, stdev=1190.52 00:35:37.388 clat percentiles (usec): 00:35:37.388 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:35:37.388 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:35:37.388 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:35:37.388 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13960], 99.95th=[45876], 00:35:37.388 | 99.99th=[48497] 00:35:37.388 bw ( KiB/s): min=36352, max=39168, per=35.86%, avg=37849.60, stdev=671.04, samples=20 00:35:37.388 iops : min= 284, max= 306, avg=295.70, stdev= 5.24, samples=20 00:35:37.388 lat (msec) : 10=42.18%, 20=57.76%, 50=0.07% 00:35:37.388 cpu : usr=94.33%, sys=5.36%, ctx=23, majf=0, minf=9 00:35:37.388 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.388 issued rwts: total=2959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.388 filename0: (groupid=0, jobs=1): err= 0: pid=1574092: Thu Dec 5 21:28:43 2024 00:35:37.388 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(335MiB/10046msec) 00:35:37.388 slat (nsec): min=6388, max=38462, avg=11592.50, stdev=1784.91 00:35:37.388 clat (usec): min=8721, max=48023, avg=11232.80, stdev=1224.73 00:35:37.388 lat (usec): min=8733, max=48035, avg=11244.40, stdev=1224.79 00:35:37.388 clat percentiles (usec): 00:35:37.388 | 1.00th=[ 9634], 
5.00th=[10028], 10.00th=[10159], 20.00th=[10552], 00:35:37.388 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:35:37.388 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:35:37.388 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[45351], 00:35:37.388 | 99.99th=[47973] 00:35:37.388 bw ( KiB/s): min=33024, max=35072, per=32.42%, avg=34214.40, stdev=513.85, samples=20 00:35:37.388 iops : min= 258, max= 274, avg=267.30, stdev= 4.01, samples=20 00:35:37.388 lat (msec) : 10=5.27%, 20=94.66%, 50=0.07% 00:35:37.388 cpu : usr=94.62%, sys=5.06%, ctx=23, majf=0, minf=9 00:35:37.388 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.388 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.388 filename0: (groupid=0, jobs=1): err= 0: pid=1574093: Thu Dec 5 21:28:43 2024 00:35:37.388 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(331MiB/10005msec) 00:35:37.388 slat (nsec): min=6353, max=36524, avg=11478.43, stdev=1804.77 00:35:37.388 clat (usec): min=5060, max=14542, avg=11316.77, stdev=800.45 00:35:37.388 lat (usec): min=5067, max=14579, avg=11328.25, stdev=800.47 00:35:37.388 clat percentiles (usec): 00:35:37.388 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10290], 20.00th=[10683], 00:35:37.388 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:35:37.388 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:35:37.388 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14484], 99.95th=[14484], 00:35:37.388 | 99.99th=[14484] 00:35:37.388 bw ( KiB/s): min=33024, max=34816, per=32.09%, avg=33868.80, stdev=477.85, samples=20 00:35:37.388 iops : min= 258, max= 272, avg=264.60, stdev= 3.73, samples=20 00:35:37.388 
lat (msec) : 10=3.70%, 20=96.30% 00:35:37.388 cpu : usr=94.51%, sys=5.17%, ctx=21, majf=0, minf=12 00:35:37.388 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.388 issued rwts: total=2649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.388 00:35:37.388 Run status group 0 (all jobs): 00:35:37.388 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-36.8MiB/s (34.7MB/s-38.6MB/s), io=1036MiB (1086MB), run=10005-10046msec 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.388 00:35:37.388 real 0m11.384s 00:35:37.388 user 0m35.317s 
00:35:37.388 sys 0m1.923s 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.388 21:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.388 ************************************ 00:35:37.388 END TEST fio_dif_digest 00:35:37.388 ************************************ 00:35:37.388 21:28:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:37.388 21:28:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.388 rmmod nvme_tcp 00:35:37.388 rmmod nvme_fabrics 00:35:37.388 rmmod nvme_keyring 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1565485 ']' 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1565485 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1565485 ']' 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1565485 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565485 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:37.388 21:28:44 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565485' 00:35:37.388 killing process with pid 1565485 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1565485 00:35:37.388 21:28:44 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1565485 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:37.388 21:28:44 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:39.294 Waiting for block devices as requested 00:35:39.294 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:39.294 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:39.294 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:39.554 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:39.554 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:39.554 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:39.554 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:39.813 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:39.813 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:39.813 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:40.072 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:40.072 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:40.072 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:40.331 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:40.331 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:40.331 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:40.331 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.589 21:28:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.589 21:28:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:40.589 21:28:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.491 21:28:50 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.491 00:35:42.491 real 1m14.888s 00:35:42.491 user 7m11.453s 00:35:42.491 sys 0m20.770s 00:35:42.491 21:28:50 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.491 21:28:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.491 ************************************ 00:35:42.491 END TEST nvmf_dif 00:35:42.491 ************************************ 00:35:42.749 21:28:50 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:42.749 21:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.749 21:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.749 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:35:42.749 ************************************ 00:35:42.749 START TEST nvmf_abort_qd_sizes 00:35:42.749 ************************************ 00:35:42.749 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:42.749 * Looking for test storage... 
00:35:42.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:42.749 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:42.749 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:42.749 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:42.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.750 --rc genhtml_branch_coverage=1 00:35:42.750 --rc genhtml_function_coverage=1 00:35:42.750 --rc genhtml_legend=1 00:35:42.750 --rc geninfo_all_blocks=1 00:35:42.750 --rc geninfo_unexecuted_blocks=1 00:35:42.750 00:35:42.750 ' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:42.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.750 --rc genhtml_branch_coverage=1 00:35:42.750 --rc genhtml_function_coverage=1 00:35:42.750 --rc genhtml_legend=1 00:35:42.750 --rc 
geninfo_all_blocks=1 00:35:42.750 --rc geninfo_unexecuted_blocks=1 00:35:42.750 00:35:42.750 ' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:42.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.750 --rc genhtml_branch_coverage=1 00:35:42.750 --rc genhtml_function_coverage=1 00:35:42.750 --rc genhtml_legend=1 00:35:42.750 --rc geninfo_all_blocks=1 00:35:42.750 --rc geninfo_unexecuted_blocks=1 00:35:42.750 00:35:42.750 ' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:42.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.750 --rc genhtml_branch_coverage=1 00:35:42.750 --rc genhtml_function_coverage=1 00:35:42.750 --rc genhtml_legend=1 00:35:42.750 --rc geninfo_all_blocks=1 00:35:42.750 --rc geninfo_unexecuted_blocks=1 00:35:42.750 00:35:42.750 ' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.750 21:28:50 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.750 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.009 21:28:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:43.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:43.009 21:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.590 21:28:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:49.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.590 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:49.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:49.591 Found net devices under 0000:86:00.0: cvl_0_0 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:49.591 Found net devices under 0000:86:00.1: cvl_0_1 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:49.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:49.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:35:49.591 00:35:49.591 --- 10.0.0.2 ping statistics --- 00:35:49.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.591 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:49.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:49.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:35:49.591 00:35:49.591 --- 10.0.0.1 ping statistics --- 00:35:49.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.591 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:49.591 21:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:51.498 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:51.498 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:51.757 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:51.757 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:51.757 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:53.135 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:53.135 21:29:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1581892 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1581892 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1581892 ']' 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.135 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.135 [2024-12-05 21:29:01.130893] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:35:53.135 [2024-12-05 21:29:01.130939] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.135 [2024-12-05 21:29:01.211988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:53.393 [2024-12-05 21:29:01.259533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.393 [2024-12-05 21:29:01.259568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.393 [2024-12-05 21:29:01.259575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.393 [2024-12-05 21:29:01.259581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.393 [2024-12-05 21:29:01.259586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:53.393 [2024-12-05 21:29:01.263389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.393 [2024-12-05 21:29:01.263433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:53.393 [2024-12-05 21:29:01.263539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.393 [2024-12-05 21:29:01.263541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:53.957 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.957 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:53.957 21:29:01 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:53.957 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:53.957 21:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.957 21:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.957 21:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.958 21:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.958 ************************************ 00:35:53.958 START TEST spdk_target_abort 00:35:53.958 ************************************ 00:35:53.958 21:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:53.958 21:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:53.958 21:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:53.958 21:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.958 21:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.233 spdk_targetn1 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.233 [2024-12-05 21:29:04.880624] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.233 [2024-12-05 21:29:04.920932] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.233 21:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.505 Initializing NVMe Controllers 00:36:00.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.506 Initialization complete. Launching workers. 
00:36:00.506 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17292, failed: 0 00:36:00.506 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1413, failed to submit 15879 00:36:00.506 success 777, unsuccessful 636, failed 0 00:36:00.506 21:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.506 21:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.784 Initializing NVMe Controllers 00:36:03.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.784 Initialization complete. Launching workers. 00:36:03.784 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8542, failed: 0 00:36:03.784 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1268, failed to submit 7274 00:36:03.784 success 293, unsuccessful 975, failed 0 00:36:03.784 21:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.784 21:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.056 Initializing NVMe Controllers 00:36:07.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.056 Initialization complete. Launching workers. 
00:36:07.056 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38376, failed: 0 00:36:07.056 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2805, failed to submit 35571 00:36:07.056 success 623, unsuccessful 2182, failed 0 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.056 21:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1581892 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1581892 ']' 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1581892 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581892 00:36:08.946 21:29:16 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581892' 00:36:08.946 killing process with pid 1581892 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1581892 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1581892 00:36:08.946 00:36:08.946 real 0m14.904s 00:36:08.946 user 0m59.310s 00:36:08.946 sys 0m2.673s 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:08.946 21:29:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.946 ************************************ 00:36:08.946 END TEST spdk_target_abort 00:36:08.946 ************************************ 00:36:08.946 21:29:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:08.946 21:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:08.946 21:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.946 21:29:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.946 ************************************ 00:36:08.946 START TEST kernel_target_abort 00:36:08.946 ************************************ 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:08.946 21:29:17 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:08.946 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:09.206 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:09.206 21:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:11.825 Waiting for block devices as requested 00:36:11.825 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:11.825 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:12.090 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:12.090 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:12.090 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:12.349 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.349 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.349 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.349 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.607 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:12.607 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:12.607 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:12.865 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:12.865 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.865 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:13.124 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:13.124 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:13.124 21:29:21 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:13.124 No valid GPT data, bailing 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:13.124 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:13.383 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:13.383 00:36:13.383 Discovery Log Number of Records 2, Generation counter 2 00:36:13.383 =====Discovery Log Entry 0====== 00:36:13.383 trtype: tcp 00:36:13.383 adrfam: ipv4 00:36:13.383 subtype: current discovery subsystem 00:36:13.383 treq: not specified, sq flow control disable supported 00:36:13.383 portid: 1 00:36:13.383 trsvcid: 4420 00:36:13.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:13.383 traddr: 10.0.0.1 00:36:13.383 eflags: none 00:36:13.383 sectype: none 00:36:13.383 =====Discovery Log Entry 1====== 00:36:13.383 trtype: tcp 00:36:13.383 adrfam: ipv4 00:36:13.383 subtype: nvme subsystem 00:36:13.383 treq: not specified, sq flow control disable supported 00:36:13.383 portid: 1 00:36:13.383 trsvcid: 4420 00:36:13.384 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:13.384 traddr: 10.0.0.1 00:36:13.384 eflags: none 00:36:13.384 sectype: none 00:36:13.384 21:29:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:13.384 21:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:16.674 Initializing NVMe Controllers 00:36:16.674 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:16.674 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:16.674 Initialization complete. Launching workers. 
00:36:16.674 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94587, failed: 0 00:36:16.674 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94587, failed to submit 0 00:36:16.674 success 0, unsuccessful 94587, failed 0 00:36:16.674 21:29:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:16.674 21:29:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:19.959 Initializing NVMe Controllers 00:36:19.959 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:19.960 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:19.960 Initialization complete. Launching workers. 00:36:19.960 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 149499, failed: 0 00:36:19.960 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37734, failed to submit 111765 00:36:19.960 success 0, unsuccessful 37734, failed 0 00:36:19.960 21:29:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:19.960 21:29:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.244 Initializing NVMe Controllers 00:36:23.244 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:23.244 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:23.244 Initialization complete. Launching workers. 
00:36:23.244 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138878, failed: 0 00:36:23.244 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34778, failed to submit 104100 00:36:23.244 success 0, unsuccessful 34778, failed 0 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:23.244 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:23.245 21:29:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:25.776 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:25.776 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:27.157 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:27.157 00:36:27.157 real 0m18.204s 00:36:27.157 user 0m9.133s 00:36:27.157 sys 0m5.081s 00:36:27.157 21:29:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.157 21:29:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.157 ************************************ 00:36:27.157 END TEST kernel_target_abort 00:36:27.157 ************************************ 00:36:27.416 21:29:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:27.416 21:29:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:27.416 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:27.416 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:27.417 rmmod nvme_tcp 00:36:27.417 rmmod nvme_fabrics 00:36:27.417 rmmod nvme_keyring 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1581892 ']' 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1581892 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1581892 ']' 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1581892 00:36:27.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1581892) - No such process 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1581892 is not found' 00:36:27.417 Process with pid 1581892 is not found 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:27.417 21:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.954 Waiting for block devices as requested 00:36:30.213 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:30.213 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:30.213 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:30.472 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:30.472 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:30.472 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:30.731 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:30.731 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:30.731 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:30.731 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:30.991 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:30.991 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:30.991 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:31.249 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:31.249 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:31.249 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:31.249 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:31.508 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:31.508 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:31.509 21:29:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.046 21:29:41 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:34.046 00:36:34.046 real 0m50.874s 00:36:34.046 user 1m12.882s 00:36:34.046 sys 0m16.564s 00:36:34.046 21:29:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.046 21:29:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.046 ************************************ 00:36:34.046 END TEST nvmf_abort_qd_sizes 00:36:34.046 ************************************ 00:36:34.046 21:29:41 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:34.046 21:29:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:34.046 21:29:41 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:34.046 21:29:41 -- common/autotest_common.sh@10 -- # set +x 00:36:34.046 ************************************ 00:36:34.046 START TEST keyring_file 00:36:34.046 ************************************ 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:34.046 * Looking for test storage... 00:36:34.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.046 21:29:41 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.046 --rc genhtml_branch_coverage=1 00:36:34.046 --rc genhtml_function_coverage=1 00:36:34.046 --rc genhtml_legend=1 00:36:34.046 --rc geninfo_all_blocks=1 00:36:34.046 --rc geninfo_unexecuted_blocks=1 00:36:34.046 00:36:34.046 ' 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.046 --rc genhtml_branch_coverage=1 00:36:34.046 --rc genhtml_function_coverage=1 00:36:34.046 --rc genhtml_legend=1 00:36:34.046 --rc geninfo_all_blocks=1 00:36:34.046 --rc 
geninfo_unexecuted_blocks=1 00:36:34.046 00:36:34.046 ' 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.046 --rc genhtml_branch_coverage=1 00:36:34.046 --rc genhtml_function_coverage=1 00:36:34.046 --rc genhtml_legend=1 00:36:34.046 --rc geninfo_all_blocks=1 00:36:34.046 --rc geninfo_unexecuted_blocks=1 00:36:34.046 00:36:34.046 ' 00:36:34.046 21:29:41 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.046 --rc genhtml_branch_coverage=1 00:36:34.046 --rc genhtml_function_coverage=1 00:36:34.046 --rc genhtml_legend=1 00:36:34.046 --rc geninfo_all_blocks=1 00:36:34.046 --rc geninfo_unexecuted_blocks=1 00:36:34.046 00:36:34.046 ' 00:36:34.046 21:29:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:34.046 21:29:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.046 21:29:41 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.046 21:29:41 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.046 21:29:41 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.046 21:29:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.046 21:29:41 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.046 21:29:41 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.046 21:29:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:34.047 21:29:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:34.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8XmNCPZ9FU 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8XmNCPZ9FU 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8XmNCPZ9FU 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8XmNCPZ9FU 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RKmbMLIrmA 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:34.047 21:29:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RKmbMLIrmA 00:36:34.047 21:29:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RKmbMLIrmA 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RKmbMLIrmA 
00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=1590925 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:34.047 21:29:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1590925 00:36:34.047 21:29:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1590925 ']' 00:36:34.047 21:29:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.047 21:29:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.047 21:29:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.047 21:29:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.047 21:29:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.047 [2024-12-05 21:29:41.970587] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:36:34.047 [2024-12-05 21:29:41.970638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590925 ] 00:36:34.047 [2024-12-05 21:29:42.043347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.047 [2024-12-05 21:29:42.085604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:34.307 21:29:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.307 [2024-12-05 21:29:42.315217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.307 null0 00:36:34.307 [2024-12-05 21:29:42.347260] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:34.307 [2024-12-05 21:29:42.347460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.307 21:29:42 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.307 [2024-12-05 21:29:42.375324] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:34.307 request: 00:36:34.307 { 00:36:34.307 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.307 "secure_channel": false, 00:36:34.307 "listen_address": { 00:36:34.307 "trtype": "tcp", 00:36:34.307 "traddr": "127.0.0.1", 00:36:34.307 "trsvcid": "4420" 00:36:34.307 }, 00:36:34.307 "method": "nvmf_subsystem_add_listener", 00:36:34.307 "req_id": 1 00:36:34.307 } 00:36:34.307 Got JSON-RPC error response 00:36:34.307 response: 00:36:34.307 { 00:36:34.307 "code": -32602, 00:36:34.307 "message": "Invalid parameters" 00:36:34.307 } 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:34.307 21:29:42 keyring_file -- keyring/file.sh@47 -- # bperfpid=1590929 00:36:34.307 21:29:42 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:34.307 21:29:42 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1590929 /var/tmp/bperf.sock 00:36:34.307 21:29:42 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1590929 ']' 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.307 21:29:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.565 [2024-12-05 21:29:42.429311] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 00:36:34.565 [2024-12-05 21:29:42.429352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590929 ] 00:36:34.565 [2024-12-05 21:29:42.499655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.565 [2024-12-05 21:29:42.541953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.565 21:29:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.565 21:29:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:34.565 21:29:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:34.565 21:29:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:34.822 21:29:42 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RKmbMLIrmA 00:36:34.822 21:29:42 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RKmbMLIrmA 00:36:35.081 21:29:43 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:35.081 21:29:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:35.081 21:29:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.081 21:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.081 21:29:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.339 21:29:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.8XmNCPZ9FU == \/\t\m\p\/\t\m\p\.\8\X\m\N\C\P\Z\9\F\U ]] 00:36:35.339 21:29:43 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:35.339 21:29:43 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.339 21:29:43 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.RKmbMLIrmA == \/\t\m\p\/\t\m\p\.\R\K\m\b\M\L\I\r\m\A ]] 00:36:35.339 21:29:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.339 21:29:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:36:35.597 21:29:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:35.597 21:29:43 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:35.597 21:29:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.597 21:29:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.597 21:29:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.597 21:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.597 21:29:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.854 21:29:43 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:35.854 21:29:43 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:35.854 21:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:36.112 [2024-12-05 21:29:43.977464] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:36.112 nvme0n1 00:36:36.112 21:29:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:36.112 21:29:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:36.112 21:29:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.112 21:29:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.112 21:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.112 21:29:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:36:36.370 21:29:44 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:36.370 21:29:44 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:36.370 21:29:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:36.370 21:29:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.370 21:29:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.370 21:29:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:36.370 21:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.370 21:29:44 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:36.370 21:29:44 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.627 Running I/O for 1 seconds... 00:36:37.559 19367.00 IOPS, 75.65 MiB/s 00:36:37.559 Latency(us) 00:36:37.559 [2024-12-05T20:29:45.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.560 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:37.560 nvme0n1 : 1.00 19409.73 75.82 0.00 0.00 6582.16 4306.65 18474.91 00:36:37.560 [2024-12-05T20:29:45.668Z] =================================================================================================================== 00:36:37.560 [2024-12-05T20:29:45.668Z] Total : 19409.73 75.82 0.00 0.00 6582.16 4306.65 18474.91 00:36:37.560 { 00:36:37.560 "results": [ 00:36:37.560 { 00:36:37.560 "job": "nvme0n1", 00:36:37.560 "core_mask": "0x2", 00:36:37.560 "workload": "randrw", 00:36:37.560 "percentage": 50, 00:36:37.560 "status": "finished", 00:36:37.560 "queue_depth": 128, 00:36:37.560 "io_size": 4096, 00:36:37.560 "runtime": 1.004393, 00:36:37.560 "iops": 19409.733042743228, 00:36:37.560 "mibps": 75.81926969821573, 00:36:37.560 
"io_failed": 0, 00:36:37.560 "io_timeout": 0, 00:36:37.560 "avg_latency_us": 6582.160694292797, 00:36:37.560 "min_latency_us": 4306.651428571428, 00:36:37.560 "max_latency_us": 18474.910476190475 00:36:37.560 } 00:36:37.560 ], 00:36:37.560 "core_count": 1 00:36:37.560 } 00:36:37.560 21:29:45 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:37.560 21:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:37.817 21:29:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:37.817 21:29:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:37.817 21:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.817 21:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.817 21:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:37.817 21:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.074 21:29:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:38.074 21:29:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:38.074 21:29:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:38.074 21:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.074 21:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.074 21:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:38.074 21:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.074 21:29:46 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:38.074 21:29:46 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.074 21:29:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.074 21:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.331 [2024-12-05 21:29:46.340139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:38.331 [2024-12-05 21:29:46.340855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadfe30 (107): Transport endpoint is not connected 00:36:38.331 [2024-12-05 21:29:46.341849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadfe30 (9): Bad file descriptor 00:36:38.331 [2024-12-05 21:29:46.342850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:38.331 [2024-12-05 21:29:46.342863] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:38.331 [2024-12-05 21:29:46.342871] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:38.331 [2024-12-05 21:29:46.342879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:38.331 request: 00:36:38.331 { 00:36:38.331 "name": "nvme0", 00:36:38.331 "trtype": "tcp", 00:36:38.331 "traddr": "127.0.0.1", 00:36:38.331 "adrfam": "ipv4", 00:36:38.331 "trsvcid": "4420", 00:36:38.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.331 "prchk_reftag": false, 00:36:38.331 "prchk_guard": false, 00:36:38.331 "hdgst": false, 00:36:38.331 "ddgst": false, 00:36:38.331 "psk": "key1", 00:36:38.331 "allow_unrecognized_csi": false, 00:36:38.331 "method": "bdev_nvme_attach_controller", 00:36:38.331 "req_id": 1 00:36:38.331 } 00:36:38.331 Got JSON-RPC error response 00:36:38.331 response: 00:36:38.331 { 00:36:38.331 "code": -5, 00:36:38.331 "message": "Input/output error" 00:36:38.331 } 00:36:38.331 21:29:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:38.331 21:29:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:38.331 21:29:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:38.331 21:29:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:38.331 21:29:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:38.331 21:29:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.331 21:29:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.331 21:29:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.331 
21:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.331 21:29:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.589 21:29:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:38.589 21:29:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:38.589 21:29:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:38.589 21:29:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.589 21:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.589 21:29:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.589 21:29:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:38.846 21:29:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:38.846 21:29:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:38.846 21:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:39.104 21:29:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:39.104 21:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:39.104 21:29:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:39.104 21:29:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:39.104 21:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.362 21:29:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:39.362 21:29:47 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.8XmNCPZ9FU 00:36:39.362 21:29:47 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.362 21:29:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:39.362 21:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:39.620 [2024-12-05 21:29:47.552038] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8XmNCPZ9FU': 0100660 00:36:39.620 [2024-12-05 21:29:47.552063] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:39.620 request: 00:36:39.620 { 00:36:39.620 "name": "key0", 00:36:39.620 "path": "/tmp/tmp.8XmNCPZ9FU", 00:36:39.620 "method": "keyring_file_add_key", 00:36:39.620 "req_id": 1 00:36:39.620 } 00:36:39.620 Got JSON-RPC error response 00:36:39.620 response: 00:36:39.620 { 00:36:39.620 "code": -1, 00:36:39.620 "message": "Operation not permitted" 00:36:39.620 } 00:36:39.620 21:29:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:39.620 21:29:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:39.620 21:29:47 keyring_file -- common/autotest_common.sh@674 
-- # [[ -n '' ]] 00:36:39.620 21:29:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:39.620 21:29:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.8XmNCPZ9FU 00:36:39.620 21:29:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:39.620 21:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8XmNCPZ9FU 00:36:39.879 21:29:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.8XmNCPZ9FU 00:36:39.879 21:29:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:39.879 21:29:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:39.879 21:29:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.879 21:29:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.879 21:29:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:39.879 21:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.879 21:29:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:39.879 21:29:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.879 21:29:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:39.879 21:29:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.138 [2024-12-05 21:29:48.153652] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8XmNCPZ9FU': No such file or directory 00:36:40.138 [2024-12-05 21:29:48.153678] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:40.138 [2024-12-05 21:29:48.153694] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:40.138 [2024-12-05 21:29:48.153717] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:40.138 [2024-12-05 21:29:48.153730] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:40.138 [2024-12-05 21:29:48.153736] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:40.138 request: 00:36:40.138 { 00:36:40.138 "name": "nvme0", 00:36:40.138 "trtype": "tcp", 00:36:40.138 "traddr": "127.0.0.1", 00:36:40.138 "adrfam": "ipv4", 00:36:40.138 "trsvcid": "4420", 00:36:40.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.138 "prchk_reftag": 
false, 00:36:40.138 "prchk_guard": false, 00:36:40.138 "hdgst": false, 00:36:40.138 "ddgst": false, 00:36:40.138 "psk": "key0", 00:36:40.138 "allow_unrecognized_csi": false, 00:36:40.138 "method": "bdev_nvme_attach_controller", 00:36:40.138 "req_id": 1 00:36:40.138 } 00:36:40.138 Got JSON-RPC error response 00:36:40.138 response: 00:36:40.138 { 00:36:40.138 "code": -19, 00:36:40.138 "message": "No such device" 00:36:40.138 } 00:36:40.138 21:29:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:40.138 21:29:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:40.138 21:29:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:40.138 21:29:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:40.138 21:29:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:40.138 21:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:40.397 21:29:48 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CFtcUhW8Kg 00:36:40.397 21:29:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:40.397 21:29:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:40.397 21:29:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
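The `prep_key`/`format_interchange_psk` helpers traced above shell out to an inline Python snippet (`python -`) to turn the raw key `00112233445566778899aabbccddeeff` with digest `0` into an NVMe TLS PSK interchange string. As a hedged sketch — the exact field layout (two-hex-digit digest selector, little-endian CRC32 appended before base64, trailing colon) is my assumption based on the `NVMeTLSkey-1` prefix and digest argument visible in the trace, not copied verbatim from SPDK's `nvmf/common.sh` — the string could be assembled like this:

```python
import base64
import zlib

def format_interchange_psk(key: bytes, digest: int) -> str:
    """Assemble an NVMe TLS PSK interchange string.

    Sketch only: assumes the layout 'NVMeTLSkey-1:<hh>:<base64(key || crc32)>:'
    where <hh> is the digest selector as two hex digits and the CRC32 of the
    key bytes is appended little-endian before base64 encoding.
    """
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

psk = format_interchange_psk(b"00112233445566778899aabbccddeeff", 0)
print(psk)
```

Decoding the base64 payload and re-checking the trailing CRC32 is a cheap way to validate such a key before handing it to `keyring_file_add_key`.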
00:36:40.397 21:29:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:40.397 21:29:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:40.398 21:29:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:40.398 21:29:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:40.398 21:29:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CFtcUhW8Kg 00:36:40.398 21:29:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CFtcUhW8Kg 00:36:40.398 21:29:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CFtcUhW8Kg 00:36:40.398 21:29:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CFtcUhW8Kg 00:36:40.398 21:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CFtcUhW8Kg 00:36:40.657 21:29:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.657 21:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.916 nvme0n1 00:36:40.916 21:29:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:40.916 21:29:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.916 21:29:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.916 21:29:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.916 21:29:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.916 21:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:41.175 21:29:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:41.175 21:29:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:41.175 21:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:41.175 21:29:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:41.175 21:29:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:41.175 21:29:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.175 21:29:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.175 21:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.434 21:29:49 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:41.434 21:29:49 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:41.434 21:29:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.434 21:29:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.434 21:29:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.434 21:29:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.434 21:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.693 21:29:49 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:41.693 21:29:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:41.693 21:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:41.952 21:29:49 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:41.952 21:29:49 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:41.952 21:29:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.952 21:29:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:41.952 21:29:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CFtcUhW8Kg 00:36:41.952 21:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CFtcUhW8Kg 00:36:42.212 21:29:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RKmbMLIrmA 00:36:42.212 21:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RKmbMLIrmA 00:36:42.471 21:29:50 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.471 21:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.730 nvme0n1 00:36:42.730 21:29:50 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:42.730 21:29:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:42.990 21:29:50 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:42.990 "subsystems": [ 00:36:42.990 { 00:36:42.990 "subsystem": "keyring", 00:36:42.990 "config": [ 00:36:42.990 { 00:36:42.990 "method": 
"keyring_file_add_key", 00:36:42.990 "params": { 00:36:42.990 "name": "key0", 00:36:42.990 "path": "/tmp/tmp.CFtcUhW8Kg" 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "keyring_file_add_key", 00:36:42.990 "params": { 00:36:42.990 "name": "key1", 00:36:42.990 "path": "/tmp/tmp.RKmbMLIrmA" 00:36:42.990 } 00:36:42.990 } 00:36:42.990 ] 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "subsystem": "iobuf", 00:36:42.990 "config": [ 00:36:42.990 { 00:36:42.990 "method": "iobuf_set_options", 00:36:42.990 "params": { 00:36:42.990 "small_pool_count": 8192, 00:36:42.990 "large_pool_count": 1024, 00:36:42.990 "small_bufsize": 8192, 00:36:42.990 "large_bufsize": 135168, 00:36:42.990 "enable_numa": false 00:36:42.990 } 00:36:42.990 } 00:36:42.990 ] 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "subsystem": "sock", 00:36:42.990 "config": [ 00:36:42.990 { 00:36:42.990 "method": "sock_set_default_impl", 00:36:42.990 "params": { 00:36:42.990 "impl_name": "posix" 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "sock_impl_set_options", 00:36:42.990 "params": { 00:36:42.990 "impl_name": "ssl", 00:36:42.990 "recv_buf_size": 4096, 00:36:42.990 "send_buf_size": 4096, 00:36:42.990 "enable_recv_pipe": true, 00:36:42.990 "enable_quickack": false, 00:36:42.990 "enable_placement_id": 0, 00:36:42.990 "enable_zerocopy_send_server": true, 00:36:42.990 "enable_zerocopy_send_client": false, 00:36:42.990 "zerocopy_threshold": 0, 00:36:42.990 "tls_version": 0, 00:36:42.990 "enable_ktls": false 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "sock_impl_set_options", 00:36:42.990 "params": { 00:36:42.990 "impl_name": "posix", 00:36:42.990 "recv_buf_size": 2097152, 00:36:42.990 "send_buf_size": 2097152, 00:36:42.990 "enable_recv_pipe": true, 00:36:42.990 "enable_quickack": false, 00:36:42.990 "enable_placement_id": 0, 00:36:42.990 "enable_zerocopy_send_server": true, 00:36:42.990 "enable_zerocopy_send_client": false, 00:36:42.990 
"zerocopy_threshold": 0, 00:36:42.990 "tls_version": 0, 00:36:42.990 "enable_ktls": false 00:36:42.990 } 00:36:42.990 } 00:36:42.990 ] 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "subsystem": "vmd", 00:36:42.990 "config": [] 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "subsystem": "accel", 00:36:42.990 "config": [ 00:36:42.990 { 00:36:42.990 "method": "accel_set_options", 00:36:42.990 "params": { 00:36:42.990 "small_cache_size": 128, 00:36:42.990 "large_cache_size": 16, 00:36:42.990 "task_count": 2048, 00:36:42.990 "sequence_count": 2048, 00:36:42.990 "buf_count": 2048 00:36:42.990 } 00:36:42.990 } 00:36:42.990 ] 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "subsystem": "bdev", 00:36:42.990 "config": [ 00:36:42.990 { 00:36:42.990 "method": "bdev_set_options", 00:36:42.990 "params": { 00:36:42.990 "bdev_io_pool_size": 65535, 00:36:42.990 "bdev_io_cache_size": 256, 00:36:42.990 "bdev_auto_examine": true, 00:36:42.990 "iobuf_small_cache_size": 128, 00:36:42.990 "iobuf_large_cache_size": 16 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "bdev_raid_set_options", 00:36:42.990 "params": { 00:36:42.990 "process_window_size_kb": 1024, 00:36:42.990 "process_max_bandwidth_mb_sec": 0 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "bdev_iscsi_set_options", 00:36:42.990 "params": { 00:36:42.990 "timeout_sec": 30 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "bdev_nvme_set_options", 00:36:42.990 "params": { 00:36:42.990 "action_on_timeout": "none", 00:36:42.990 "timeout_us": 0, 00:36:42.990 "timeout_admin_us": 0, 00:36:42.990 "keep_alive_timeout_ms": 10000, 00:36:42.990 "arbitration_burst": 0, 00:36:42.990 "low_priority_weight": 0, 00:36:42.990 "medium_priority_weight": 0, 00:36:42.990 "high_priority_weight": 0, 00:36:42.990 "nvme_adminq_poll_period_us": 10000, 00:36:42.990 "nvme_ioq_poll_period_us": 0, 00:36:42.990 "io_queue_requests": 512, 00:36:42.990 "delay_cmd_submit": true, 00:36:42.990 
"transport_retry_count": 4, 00:36:42.990 "bdev_retry_count": 3, 00:36:42.990 "transport_ack_timeout": 0, 00:36:42.990 "ctrlr_loss_timeout_sec": 0, 00:36:42.990 "reconnect_delay_sec": 0, 00:36:42.990 "fast_io_fail_timeout_sec": 0, 00:36:42.990 "disable_auto_failback": false, 00:36:42.990 "generate_uuids": false, 00:36:42.990 "transport_tos": 0, 00:36:42.990 "nvme_error_stat": false, 00:36:42.990 "rdma_srq_size": 0, 00:36:42.990 "io_path_stat": false, 00:36:42.990 "allow_accel_sequence": false, 00:36:42.990 "rdma_max_cq_size": 0, 00:36:42.990 "rdma_cm_event_timeout_ms": 0, 00:36:42.990 "dhchap_digests": [ 00:36:42.990 "sha256", 00:36:42.990 "sha384", 00:36:42.990 "sha512" 00:36:42.990 ], 00:36:42.990 "dhchap_dhgroups": [ 00:36:42.990 "null", 00:36:42.990 "ffdhe2048", 00:36:42.990 "ffdhe3072", 00:36:42.990 "ffdhe4096", 00:36:42.990 "ffdhe6144", 00:36:42.990 "ffdhe8192" 00:36:42.990 ] 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.990 "method": "bdev_nvme_attach_controller", 00:36:42.990 "params": { 00:36:42.990 "name": "nvme0", 00:36:42.990 "trtype": "TCP", 00:36:42.990 "adrfam": "IPv4", 00:36:42.990 "traddr": "127.0.0.1", 00:36:42.990 "trsvcid": "4420", 00:36:42.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:42.990 "prchk_reftag": false, 00:36:42.990 "prchk_guard": false, 00:36:42.990 "ctrlr_loss_timeout_sec": 0, 00:36:42.990 "reconnect_delay_sec": 0, 00:36:42.990 "fast_io_fail_timeout_sec": 0, 00:36:42.990 "psk": "key0", 00:36:42.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:42.990 "hdgst": false, 00:36:42.990 "ddgst": false, 00:36:42.990 "multipath": "multipath" 00:36:42.990 } 00:36:42.990 }, 00:36:42.990 { 00:36:42.991 "method": "bdev_nvme_set_hotplug", 00:36:42.991 "params": { 00:36:42.991 "period_us": 100000, 00:36:42.991 "enable": false 00:36:42.991 } 00:36:42.991 }, 00:36:42.991 { 00:36:42.991 "method": "bdev_wait_for_examine" 00:36:42.991 } 00:36:42.991 ] 00:36:42.991 }, 00:36:42.991 { 00:36:42.991 "subsystem": "nbd", 00:36:42.991 "config": [] 
00:36:42.991 } 00:36:42.991 ] 00:36:42.991 }' 00:36:42.991 21:29:50 keyring_file -- keyring/file.sh@115 -- # killprocess 1590929 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1590929 ']' 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1590929 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590929 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590929' 00:36:42.991 killing process with pid 1590929 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@973 -- # kill 1590929 00:36:42.991 Received shutdown signal, test time was about 1.000000 seconds 00:36:42.991 00:36:42.991 Latency(us) 00:36:42.991 [2024-12-05T20:29:51.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.991 [2024-12-05T20:29:51.099Z] =================================================================================================================== 00:36:42.991 [2024-12-05T20:29:51.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:42.991 21:29:50 keyring_file -- common/autotest_common.sh@978 -- # wait 1590929 00:36:43.249 21:29:51 keyring_file -- keyring/file.sh@118 -- # bperfpid=1592447 00:36:43.249 21:29:51 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1592447 /var/tmp/bperf.sock 00:36:43.249 21:29:51 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:43.249 
21:29:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1592447 ']' 00:36:43.249 21:29:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.249 21:29:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.249 21:29:51 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:43.249 "subsystems": [ 00:36:43.249 { 00:36:43.249 "subsystem": "keyring", 00:36:43.249 "config": [ 00:36:43.249 { 00:36:43.249 "method": "keyring_file_add_key", 00:36:43.249 "params": { 00:36:43.249 "name": "key0", 00:36:43.249 "path": "/tmp/tmp.CFtcUhW8Kg" 00:36:43.249 } 00:36:43.249 }, 00:36:43.249 { 00:36:43.249 "method": "keyring_file_add_key", 00:36:43.249 "params": { 00:36:43.249 "name": "key1", 00:36:43.249 "path": "/tmp/tmp.RKmbMLIrmA" 00:36:43.249 } 00:36:43.249 } 00:36:43.249 ] 00:36:43.249 }, 00:36:43.249 { 00:36:43.249 "subsystem": "iobuf", 00:36:43.249 "config": [ 00:36:43.249 { 00:36:43.249 "method": "iobuf_set_options", 00:36:43.249 "params": { 00:36:43.249 "small_pool_count": 8192, 00:36:43.249 "large_pool_count": 1024, 00:36:43.249 "small_bufsize": 8192, 00:36:43.249 "large_bufsize": 135168, 00:36:43.249 "enable_numa": false 00:36:43.249 } 00:36:43.249 } 00:36:43.249 ] 00:36:43.249 }, 00:36:43.249 { 00:36:43.249 "subsystem": "sock", 00:36:43.249 "config": [ 00:36:43.249 { 00:36:43.249 "method": "sock_set_default_impl", 00:36:43.249 "params": { 00:36:43.249 "impl_name": "posix" 00:36:43.249 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "sock_impl_set_options", 00:36:43.250 "params": { 00:36:43.250 "impl_name": "ssl", 00:36:43.250 "recv_buf_size": 4096, 00:36:43.250 "send_buf_size": 4096, 00:36:43.250 "enable_recv_pipe": true, 00:36:43.250 "enable_quickack": false, 00:36:43.250 "enable_placement_id": 0, 00:36:43.250 "enable_zerocopy_send_server": true, 00:36:43.250 "enable_zerocopy_send_client": false, 00:36:43.250 "zerocopy_threshold": 0, 00:36:43.250 "tls_version": 0, 00:36:43.250 
"enable_ktls": false 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "sock_impl_set_options", 00:36:43.250 "params": { 00:36:43.250 "impl_name": "posix", 00:36:43.250 "recv_buf_size": 2097152, 00:36:43.250 "send_buf_size": 2097152, 00:36:43.250 "enable_recv_pipe": true, 00:36:43.250 "enable_quickack": false, 00:36:43.250 "enable_placement_id": 0, 00:36:43.250 "enable_zerocopy_send_server": true, 00:36:43.250 "enable_zerocopy_send_client": false, 00:36:43.250 "zerocopy_threshold": 0, 00:36:43.250 "tls_version": 0, 00:36:43.250 "enable_ktls": false 00:36:43.250 } 00:36:43.250 } 00:36:43.250 ] 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "subsystem": "vmd", 00:36:43.250 "config": [] 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "subsystem": "accel", 00:36:43.250 "config": [ 00:36:43.250 { 00:36:43.250 "method": "accel_set_options", 00:36:43.250 "params": { 00:36:43.250 "small_cache_size": 128, 00:36:43.250 "large_cache_size": 16, 00:36:43.250 "task_count": 2048, 00:36:43.250 "sequence_count": 2048, 00:36:43.250 "buf_count": 2048 00:36:43.250 } 00:36:43.250 } 00:36:43.250 ] 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "subsystem": "bdev", 00:36:43.250 "config": [ 00:36:43.250 { 00:36:43.250 "method": "bdev_set_options", 00:36:43.250 "params": { 00:36:43.250 "bdev_io_pool_size": 65535, 00:36:43.250 "bdev_io_cache_size": 256, 00:36:43.250 "bdev_auto_examine": true, 00:36:43.250 "iobuf_small_cache_size": 128, 00:36:43.250 "iobuf_large_cache_size": 16 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "bdev_raid_set_options", 00:36:43.250 "params": { 00:36:43.250 "process_window_size_kb": 1024, 00:36:43.250 "process_max_bandwidth_mb_sec": 0 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "bdev_iscsi_set_options", 00:36:43.250 "params": { 00:36:43.250 "timeout_sec": 30 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "bdev_nvme_set_options", 00:36:43.250 "params": { 00:36:43.250 "action_on_timeout": 
"none", 00:36:43.250 "timeout_us": 0, 00:36:43.250 "timeout_admin_us": 0, 00:36:43.250 "keep_alive_timeout_ms": 10000, 00:36:43.250 "arbitration_burst": 0, 00:36:43.250 "low_priority_weight": 0, 00:36:43.250 "medium_priority_weight": 0, 00:36:43.250 "high_priority_weight": 0, 00:36:43.250 "nvme_adminq_poll_period_us": 10000, 00:36:43.250 "nvme_ioq_poll_period_us": 0, 00:36:43.250 "io_queue_requests": 512, 00:36:43.250 "delay_cmd_submit": true, 00:36:43.250 "transport_retry_count": 4, 00:36:43.250 "bdev_retry_count": 3, 00:36:43.250 "transport_ack_timeout": 0, 00:36:43.250 "ctrlr_loss_timeout_sec": 0, 00:36:43.250 "reconnect_delay_sec": 0, 00:36:43.250 "fast_io_fail_timeout_sec": 0, 00:36:43.250 "disable_auto_failback": false, 00:36:43.250 "generate_uuids": false, 00:36:43.250 "transport_tos": 0, 00:36:43.250 "nvme_error_stat": false, 00:36:43.250 "rdma_srq_size": 0, 00:36:43.250 "io_path_stat": false, 00:36:43.250 "allow_accel_sequence": false, 00:36:43.250 "rdma_max_cq_size": 0, 00:36:43.250 "rdma_cm_event_timeout_ms": 0, 00:36:43.250 "dhchap_digests": [ 00:36:43.250 "sha256", 00:36:43.250 "sha384", 00:36:43.250 "sha512" 00:36:43.250 ], 00:36:43.250 "dhchap_dhgroups": [ 00:36:43.250 "null", 00:36:43.250 "ffdhe2048", 00:36:43.250 "ffdhe3072", 00:36:43.250 "ffdhe4096", 00:36:43.250 "ffdhe6144", 00:36:43.250 "ffdhe8192" 00:36:43.250 ] 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "bdev_nvme_attach_controller", 00:36:43.250 "params": { 00:36:43.250 "name": "nvme0", 00:36:43.250 "trtype": "TCP", 00:36:43.250 "adrfam": "IPv4", 00:36:43.250 "traddr": "127.0.0.1", 00:36:43.250 "trsvcid": "4420", 00:36:43.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.250 "prchk_reftag": false, 00:36:43.250 "prchk_guard": false, 00:36:43.250 "ctrlr_loss_timeout_sec": 0, 00:36:43.250 "reconnect_delay_sec": 0, 00:36:43.250 "fast_io_fail_timeout_sec": 0, 00:36:43.250 "psk": "key0", 00:36:43.250 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.250 "hdgst": false, 
00:36:43.250 "ddgst": false, 00:36:43.250 "multipath": "multipath" 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "bdev_nvme_set_hotplug", 00:36:43.250 "params": { 00:36:43.250 "period_us": 100000, 00:36:43.250 "enable": false 00:36:43.250 } 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "method": "bdev_wait_for_examine" 00:36:43.250 } 00:36:43.250 ] 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "subsystem": "nbd", 00:36:43.250 "config": [] 00:36:43.250 } 00:36:43.250 ] 00:36:43.250 }' 00:36:43.250 21:29:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.250 21:29:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.250 21:29:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:43.250 [2024-12-05 21:29:51.182754] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
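Every `rpc.py -s /var/tmp/bperf.sock <method>` call in this trace is a JSON-RPC 2.0 request sent over a Unix domain socket that bdevperf listens on (hence the "Waiting for process to start up and listen on UNIX domain socket" message above). A minimal sketch of such a client follows; the method names are taken from the trace, but the read-until-a-full-object-parses framing is a simplifying assumption, not SPDK's documented wire handling:

```python
import json
import socket

def build_request(method, params=None, req_id=1):
    """Serialize a JSON-RPC 2.0 request like the ones rpc.py issues."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

def call(sock_path, method, params=None):
    """Send one request over a Unix socket and return the parsed reply.

    Sketch only: accumulates bytes until a complete JSON object parses;
    a production client would stream-parse instead.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_request(method, params))
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue
        return json.loads(buf)

print(build_request("keyring_get_keys").decode())
```

Usage mirrors the trace, e.g. `call("/var/tmp/bperf.sock", "keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.CFtcUhW8Kg"})`.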
00:36:43.250 [2024-12-05 21:29:51.182802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592447 ] 00:36:43.250 [2024-12-05 21:29:51.256025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.250 [2024-12-05 21:29:51.292965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.509 [2024-12-05 21:29:51.455417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:44.075 21:29:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.075 21:29:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:44.075 21:29:52 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:44.075 21:29:52 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:44.075 21:29:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.334 21:29:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:44.334 21:29:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.334 21:29:52 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:44.334 21:29:52 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:44.334 21:29:52 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.334 21:29:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.594 21:29:52 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:44.594 21:29:52 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:44.594 21:29:52 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:44.594 21:29:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:44.853 21:29:52 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:44.853 21:29:52 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:44.853 21:29:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CFtcUhW8Kg /tmp/tmp.RKmbMLIrmA 00:36:44.853 21:29:52 keyring_file -- keyring/file.sh@20 -- # killprocess 1592447 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1592447 ']' 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1592447 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592447 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1592447' 00:36:44.853 killing process with pid 1592447 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@973 -- # kill 1592447 00:36:44.853 Received shutdown signal, test time was about 1.000000 seconds 00:36:44.853 00:36:44.853 Latency(us) 00:36:44.853 [2024-12-05T20:29:52.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.853 [2024-12-05T20:29:52.961Z] =================================================================================================================== 00:36:44.853 [2024-12-05T20:29:52.961Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:44.853 21:29:52 keyring_file -- common/autotest_common.sh@978 -- # wait 1592447 00:36:45.111 21:29:53 keyring_file -- keyring/file.sh@21 -- # killprocess 1590925 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1590925 ']' 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1590925 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590925 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590925' 00:36:45.111 killing process with pid 1590925 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@973 -- # kill 1590925 00:36:45.111 21:29:53 keyring_file -- common/autotest_common.sh@978 -- # wait 1590925 00:36:45.370 00:36:45.370 real 0m11.773s 00:36:45.370 user 0m29.278s 00:36:45.370 sys 0m2.665s 00:36:45.370 21:29:53 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
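The `get_refcnt`/`get_key` checks running throughout this test pipe `keyring_get_keys` output through `jq '.[] | select(.name == "key0")'` and then `jq -r .refcnt`. The same filter is easy to express in Python when jq is unavailable; the sample payload below is hypothetical, merely shaped like the fields (`.name`, `.refcnt`, `.removed`) the trace selects:

```python
import json

def get_key(keys_json, name):
    """Mirror jq '.[] | select(.name == NAME)': first entry with that name, else None."""
    return next((k for k in json.loads(keys_json) if k["name"] == name), None)

def get_refcnt(keys_json, name):
    """Mirror 'jq -r .refcnt' applied to the selected entry."""
    key = get_key(keys_json, name)
    return key["refcnt"] if key else None

# Hypothetical keyring_get_keys reply, shaped like the fields the trace checks.
sample = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.CFtcUhW8Kg", "refcnt": 2, "removed": False},
    {"name": "key1", "path": "/tmp/tmp.RKmbMLIrmA", "refcnt": 1, "removed": False},
])
print(get_refcnt(sample, "key0"))
```

This is the comparison the script then makes in bash arithmetic, e.g. `(( 2 == 2 ))` after attaching a controller that holds an extra reference on key0.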
00:36:45.370 21:29:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.370 ************************************ 00:36:45.370 END TEST keyring_file 00:36:45.370 ************************************ 00:36:45.370 21:29:53 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:45.370 21:29:53 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:45.370 21:29:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:45.370 21:29:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.370 21:29:53 -- common/autotest_common.sh@10 -- # set +x 00:36:45.370 ************************************ 00:36:45.370 START TEST keyring_linux 00:36:45.370 ************************************ 00:36:45.370 21:29:53 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:45.370 Joined session keyring: 939935749 00:36:45.630 * Looking for test storage... 
00:36:45.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.630 --rc genhtml_branch_coverage=1 00:36:45.630 --rc genhtml_function_coverage=1 00:36:45.630 --rc genhtml_legend=1 00:36:45.630 --rc geninfo_all_blocks=1 00:36:45.630 --rc geninfo_unexecuted_blocks=1 00:36:45.630 00:36:45.630 ' 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.630 --rc genhtml_branch_coverage=1 00:36:45.630 --rc genhtml_function_coverage=1 00:36:45.630 --rc genhtml_legend=1 00:36:45.630 --rc geninfo_all_blocks=1 00:36:45.630 --rc geninfo_unexecuted_blocks=1 00:36:45.630 00:36:45.630 ' 
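The lcov version gate above (`lt 1.15 2`) runs `scripts/common.sh`'s `cmp_versions`, which splits each version string on `.`, `:` and `-` (the `IFS=.-:` reads in the trace) and compares fields numerically left to right. A sketch of the same comparison; zero-padding the shorter version is my reading of the bash arithmetic, where an unset array element evaluates to 0:

```python
import re

def version_lt(v1, v2):
    """Mirror cmp_versions with '<': split on '.', ':' or '-', pad the
    shorter field list with zeros, then compare field by field."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b  # Python list comparison is elementwise, left to right

print(version_lt("1.15", "2"))
```

Under this reading, `1.15 < 2` holds because the first fields already differ (1 < 2), which is why the script takes the pre-2.x lcov option branch.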
00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.630 --rc genhtml_branch_coverage=1 00:36:45.630 --rc genhtml_function_coverage=1 00:36:45.630 --rc genhtml_legend=1 00:36:45.630 --rc geninfo_all_blocks=1 00:36:45.630 --rc geninfo_unexecuted_blocks=1 00:36:45.630 00:36:45.630 ' 00:36:45.630 21:29:53 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:45.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.630 --rc genhtml_branch_coverage=1 00:36:45.630 --rc genhtml_function_coverage=1 00:36:45.630 --rc genhtml_legend=1 00:36:45.630 --rc geninfo_all_blocks=1 00:36:45.630 --rc geninfo_unexecuted_blocks=1 00:36:45.630 00:36:45.630 ' 00:36:45.630 21:29:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:45.630 21:29:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.630 21:29:53 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.630 21:29:53 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.630 21:29:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.631 21:29:53 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.631 21:29:53 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.631 21:29:53 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:45.631 21:29:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:45.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:45.631 /tmp/:spdk-test:key0 00:36:45.631 21:29:53 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:45.631 21:29:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:45.631 21:29:53 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:45.890 21:29:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:45.890 21:29:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:45.890 /tmp/:spdk-test:key1 00:36:45.890 21:29:53 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1593000 00:36:45.890 21:29:53 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1593000 00:36:45.890 21:29:53 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:45.890 21:29:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1593000 ']' 00:36:45.890 21:29:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.890 21:29:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.890 21:29:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.890 21:29:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.890 21:29:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:45.890 [2024-12-05 21:29:53.793188] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:36:45.890 [2024-12-05 21:29:53.793239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593000 ] 00:36:45.890 [2024-12-05 21:29:53.866384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.890 [2024-12-05 21:29:53.909741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:46.150 21:29:54 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:46.150 [2024-12-05 21:29:54.123420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:46.150 null0 00:36:46.150 [2024-12-05 21:29:54.155470] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:46.150 [2024-12-05 21:29:54.155798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.150 21:29:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:46.150 180920351 00:36:46.150 21:29:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:46.150 760574174 00:36:46.150 21:29:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1593005 00:36:46.150 21:29:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1593005 /var/tmp/bperf.sock 00:36:46.150 21:29:54 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1593005 ']' 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:46.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:46.150 21:29:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:46.150 [2024-12-05 21:29:54.226832] Starting SPDK v25.01-pre git sha1 2b8672176 / DPDK 24.03.0 initialization... 
00:36:46.151 [2024-12-05 21:29:54.226871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593005 ] 00:36:46.410 [2024-12-05 21:29:54.300409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.410 [2024-12-05 21:29:54.340523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.410 21:29:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.410 21:29:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:46.410 21:29:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:46.410 21:29:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:46.669 21:29:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:46.669 21:29:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:46.928 21:29:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:46.928 21:29:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:46.928 [2024-12-05 21:29:54.989294] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:47.187 nvme0n1 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:47.187 21:29:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:47.187 21:29:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:47.187 21:29:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.187 21:29:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:47.187 21:29:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.445 21:29:55 keyring_linux -- keyring/linux.sh@25 -- # sn=180920351 00:36:47.445 21:29:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:47.445 21:29:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:47.445 21:29:55 keyring_linux -- keyring/linux.sh@26 -- # [[ 180920351 == \1\8\0\9\2\0\3\5\1 ]] 00:36:47.445 21:29:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 180920351 00:36:47.445 21:29:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:47.445 21:29:55 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.702 Running I/O for 1 seconds... 00:36:48.636 21631.00 IOPS, 84.50 MiB/s 00:36:48.636 Latency(us) 00:36:48.636 [2024-12-05T20:29:56.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.636 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:48.636 nvme0n1 : 1.01 21632.13 84.50 0.00 0.00 5897.94 4868.39 13419.28 00:36:48.636 [2024-12-05T20:29:56.744Z] =================================================================================================================== 00:36:48.636 [2024-12-05T20:29:56.744Z] Total : 21632.13 84.50 0.00 0.00 5897.94 4868.39 13419.28 00:36:48.636 { 00:36:48.636 "results": [ 00:36:48.636 { 00:36:48.636 "job": "nvme0n1", 00:36:48.636 "core_mask": "0x2", 00:36:48.636 "workload": "randread", 00:36:48.636 "status": "finished", 00:36:48.636 "queue_depth": 128, 00:36:48.636 "io_size": 4096, 00:36:48.636 "runtime": 1.005865, 00:36:48.636 "iops": 21632.127571791443, 00:36:48.636 "mibps": 84.50049832731032, 00:36:48.636 "io_failed": 0, 00:36:48.636 "io_timeout": 0, 00:36:48.636 "avg_latency_us": 5897.940961047317, 00:36:48.636 "min_latency_us": 4868.388571428572, 00:36:48.636 "max_latency_us": 13419.27619047619 00:36:48.636 } 00:36:48.636 ], 00:36:48.636 "core_count": 1 00:36:48.636 } 00:36:48.636 21:29:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:48.636 21:29:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:48.895 21:29:56 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:48.895 21:29:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:48.895 21:29:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.895 21:29:56 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:48.895 21:29:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:49.153 [2024-12-05 21:29:57.154220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:49.153 [2024-12-05 21:29:57.154895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25a3bc0 (107): Transport endpoint is not connected 00:36:49.153 [2024-12-05 21:29:57.155890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25a3bc0 (9): Bad file descriptor 00:36:49.153 [2024-12-05 21:29:57.156892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:49.153 [2024-12-05 21:29:57.156906] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:49.153 [2024-12-05 21:29:57.156913] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:49.153 [2024-12-05 21:29:57.156922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:49.153 request: 00:36:49.153 { 00:36:49.153 "name": "nvme0", 00:36:49.153 "trtype": "tcp", 00:36:49.153 "traddr": "127.0.0.1", 00:36:49.153 "adrfam": "ipv4", 00:36:49.153 "trsvcid": "4420", 00:36:49.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.153 "prchk_reftag": false, 00:36:49.153 "prchk_guard": false, 00:36:49.153 "hdgst": false, 00:36:49.153 "ddgst": false, 00:36:49.153 "psk": ":spdk-test:key1", 00:36:49.153 "allow_unrecognized_csi": false, 00:36:49.154 "method": "bdev_nvme_attach_controller", 00:36:49.154 "req_id": 1 00:36:49.154 } 00:36:49.154 Got JSON-RPC error response 00:36:49.154 response: 00:36:49.154 { 00:36:49.154 "code": -5, 00:36:49.154 "message": "Input/output error" 00:36:49.154 } 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@33 -- # sn=180920351 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 180920351 00:36:49.154 1 links removed 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:49.154 
21:29:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@33 -- # sn=760574174 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 760574174 00:36:49.154 1 links removed 00:36:49.154 21:29:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1593005 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1593005 ']' 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1593005 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593005 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593005' 00:36:49.154 killing process with pid 1593005 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 1593005 00:36:49.154 Received shutdown signal, test time was about 1.000000 seconds 00:36:49.154 00:36:49.154 Latency(us) 00:36:49.154 [2024-12-05T20:29:57.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.154 [2024-12-05T20:29:57.262Z] =================================================================================================================== 00:36:49.154 [2024-12-05T20:29:57.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:49.154 21:29:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 1593005 
00:36:49.412 21:29:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1593000 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1593000 ']' 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1593000 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593000 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593000' 00:36:49.412 killing process with pid 1593000 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 1593000 00:36:49.412 21:29:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 1593000 00:36:49.670 00:36:49.670 real 0m4.304s 00:36:49.670 user 0m8.092s 00:36:49.670 sys 0m1.434s 00:36:49.670 21:29:57 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.670 21:29:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:49.670 ************************************ 00:36:49.670 END TEST keyring_linux 00:36:49.670 ************************************ 00:36:49.929 21:29:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:49.929 21:29:57 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']'
00:36:49.929 21:29:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:49.929 21:29:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:49.929 21:29:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:49.929 21:29:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:49.929 21:29:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:49.929 21:29:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:49.929 21:29:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:49.929 21:29:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:49.929 21:29:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:49.929 21:29:57 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:49.929 21:29:57 -- common/autotest_common.sh@10 -- # set +x
00:36:49.929 21:29:57 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:49.929 21:29:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:49.929 21:29:57 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:49.929 21:29:57 -- common/autotest_common.sh@10 -- # set +x
00:36:55.212 INFO: APP EXITING
00:36:55.212 INFO: killing all VMs
00:36:55.212 INFO: killing vhost app
00:36:55.212 INFO: EXIT DONE
00:36:57.750 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:36:57.750 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:36:57.750 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:37:01.041 Cleaning
00:37:01.041 Removing: /var/run/dpdk/spdk0/config
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:37:01.041 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:37:01.041 Removing: /var/run/dpdk/spdk0/hugepage_info
00:37:01.041 Removing: /var/run/dpdk/spdk1/config
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:37:01.041 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:37:01.041 Removing: /var/run/dpdk/spdk1/hugepage_info
00:37:01.041 Removing: /var/run/dpdk/spdk2/config
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:37:01.041 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:37:01.041 Removing: /var/run/dpdk/spdk2/hugepage_info
00:37:01.041 Removing: /var/run/dpdk/spdk3/config
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:37:01.041 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:37:01.041 Removing: /var/run/dpdk/spdk3/hugepage_info
00:37:01.041 Removing: /var/run/dpdk/spdk4/config
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:37:01.041 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:37:01.041 Removing: /var/run/dpdk/spdk4/hugepage_info
00:37:01.041 Removing: /dev/shm/bdev_svc_trace.1
00:37:01.041 Removing: /dev/shm/nvmf_trace.0
00:37:01.041 Removing: /dev/shm/spdk_tgt_trace.pid1113287
00:37:01.041 Removing: /var/run/dpdk/spdk0
00:37:01.041 Removing: /var/run/dpdk/spdk1
00:37:01.041 Removing: /var/run/dpdk/spdk2
00:37:01.041 Removing: /var/run/dpdk/spdk3
00:37:01.041 Removing: /var/run/dpdk/spdk4
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1110926
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1111988
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1113287
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1113804
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1114709
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1114900
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1115871
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1115950
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1116240
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1117978
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1119256
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1119541
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1119828
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1120207
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1120438
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1120688
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1120937
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1121225
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1121969
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1124968
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1125226
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1125480
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1125489
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1125980
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1125988
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1126497
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1126503
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1126775
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1126988
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1127089
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1127258
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1127695
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1127868
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1128219
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1132087
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1136398
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1147180
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1147870
00:37:01.041 Removing: /var/run/dpdk/spdk_pid1152163
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1152623
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1156891
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1162779
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1165388
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1175673
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1184738
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1186368
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1187336
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1204888
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1208971
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1254482
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1259664
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1265424
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1271926
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1271928
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1272841
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1273748
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1274637
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1275131
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1275143
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1275373
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1275599
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1275605
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1276520
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1277323
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1278138
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1278820
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1278824
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1279061
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1280298
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1281282
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1289911
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1318417
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1323313
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1324919
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1326752
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1326788
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1327007
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1327228
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1327733
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1329364
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1330303
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1330678
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1332956
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1333443
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1334028
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1338224
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1343600
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1343601
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1343602
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1347393
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1355977
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1359785
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1366514
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1367816
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1369318
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1370698
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1375271
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1379731
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1383762
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1391238
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1391344
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1395855
00:37:01.042 Removing: /var/run/dpdk/spdk_pid1396088
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1396318
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1396746
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1396780
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1401361
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1401843
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1406426
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1408961
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1414611
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1420199
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1428984
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1436225
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1436229
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1455084
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1455728
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1456203
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1456674
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1457412
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1457925
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1458605
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1459178
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1463769
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1464042
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1469986
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1470180
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1475650
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1479745
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1489613
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1490107
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1494421
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1494809
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1498838
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1504690
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1507781
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1517967
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1526640
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1528237
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1529162
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1545284
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1549114
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1551930
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1560490
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1560496
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1565561
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1567501
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1569472
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1570613
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1572690
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1573768
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1582520
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1582987
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1583654
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1585947
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1586492
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1587039
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1590925
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1590929
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1592447
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1593000
00:37:01.301 Removing: /var/run/dpdk/spdk_pid1593005
00:37:01.301 Clean
00:37:01.559 21:30:09 -- common/autotest_common.sh@1453 -- # return 0
00:37:01.559 21:30:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:01.559 21:30:09 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:01.559 21:30:09 -- common/autotest_common.sh@10 -- # set +x
00:37:01.559 21:30:09 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:01.559 21:30:09 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:01.559 21:30:09 -- common/autotest_common.sh@10 -- # set +x
00:37:01.559 21:30:09 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:01.560 21:30:09 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:01.560 21:30:09 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:01.560 21:30:09 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:01.560 21:30:09 -- spdk/autotest.sh@398 -- # hostname
00:37:01.560 21:30:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:01.875 geninfo: WARNING: invalid characters removed from testname!
00:37:23.861 21:30:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:25.238 21:30:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:27.148 21:30:34 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:29.056 21:30:36 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:30.958 21:30:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:32.860 21:30:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:34.236 21:30:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:34.236 21:30:42 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:34.236 21:30:42 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:37:34.236 21:30:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:34.236 21:30:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:34.236 21:30:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:34.495 + [[ -n 1033661 ]]
00:37:34.495 + sudo kill 1033661
00:37:34.505 [Pipeline] }
00:37:34.521 [Pipeline] // stage
00:37:34.526 [Pipeline] }
00:37:34.541 [Pipeline] // timeout
00:37:34.546 [Pipeline] }
00:37:34.561 [Pipeline] // catchError
00:37:34.566 [Pipeline] }
00:37:34.582 [Pipeline] // wrap
00:37:34.589 [Pipeline] }
00:37:34.602 [Pipeline] // catchError
00:37:34.611 [Pipeline] stage
00:37:34.613 [Pipeline] { (Epilogue)
00:37:34.625 [Pipeline] catchError
00:37:34.627 [Pipeline] {
00:37:34.639 [Pipeline] echo
00:37:34.640 Cleanup processes
00:37:34.646 [Pipeline] sh
00:37:34.933 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:34.933 1604173 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:34.946 [Pipeline] sh
00:37:35.231 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:35.231 ++ grep -v 'sudo pgrep'
00:37:35.231 ++ awk '{print $1}'
00:37:35.231 + sudo kill -9
00:37:35.231 + true
00:37:35.243 [Pipeline] sh
00:37:35.527 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:47.738 [Pipeline] sh
00:37:48.022 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:48.022 Artifacts sizes are good
00:37:48.036 [Pipeline] archiveArtifacts
00:37:48.044 Archiving artifacts
00:37:48.174 [Pipeline] sh
00:37:48.458 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:48.473 [Pipeline] cleanWs
00:37:48.483 [WS-CLEANUP] Deleting project workspace...
00:37:48.484 [WS-CLEANUP] Deferred wipeout is used...
00:37:48.490 [WS-CLEANUP] done
00:37:48.492 [Pipeline] }
00:37:48.510 [Pipeline] // catchError
00:37:48.523 [Pipeline] sh
00:37:48.866 + logger -p user.info -t JENKINS-CI
00:37:48.876 [Pipeline] }
00:37:48.893 [Pipeline] // stage
00:37:48.899 [Pipeline] }
00:37:48.916 [Pipeline] // node
00:37:48.922 [Pipeline] End of Pipeline
00:37:48.961 Finished: SUCCESS